DONALD L. TURCOTTE
Cornell University
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521561648
A catalogue record for this publication is available from the British Library
The color figures within this publication have been removed for this digital
reprinting. At the time of going to press the original images were available
in color for download from http://www.cambridge.org/9780521567336
CONTENTS
Preface
Preface to the second edition
1 Scale invariance
2 Definition of a fractal set
2.1 Deterministic fractals
2.2 Statistical fractals
2.3 Depositional sequences
2.4 Why fractal distributions?
3 Fragmentation
3.1 Background
3.2 Probability and statistics
3.3 Fragmentation data
3.4 Fragmentation models
3.5 Porosity
4 Seismicity and tectonics
4.1 Seismicity
4.2 Faults
4.3 Spatial distribution of earthquakes
4.4 Volcanic eruptions
5 Ore grade and tonnage
5.1 Ore-enrichment models
5.2 Ore-enrichment data
5.3 Petroleum data
6 Fractal clustering
6.1 Clustering
6.2 Pair-correlation techniques
6.3 Lacunarity
6.4 Multifractals
7 Self-affine fractals
7.1 Definition of a self-affine fractal
7.2 Time series
7.3 Self-affine time series
7.4 Fractional Gaussian noises and fractional
Brownian walks
7.5 Fractional log-normal noises and walks
7.6 Rescaled-range (R/S) analysis
7.7 Applications of self-affine fractals
8 Geomorphology
8.1 Drainage networks
8.2 Fractal trees
8.3 Growth models
8.4 Diffusion-limited aggregation (DLA)
8.5 Models for drainage networks
8.6 Models for erosion and deposition
8.7 Floods
8.8 Wavelets
9 Dynamical systems
9.1 Nonlinear equations
9.2 Bifurcations
10 Logistic map
10.1 Chaos
10.2 Lyapunov exponent
11 Slider-block models
12 Lorenz equations
13 Is mantle convection chaotic?
14 Rikitake dynamo
15 Renormalization group method
15.1 Renormalization
15.2 Percolation clusters
15.3 Applications to fragmentation
15.4 Applications to fault rupture
15.5 Log-periodic behavior
16 Self-organized criticality
16.1 Sand-pile models
16.2 Slider-block models
16.3 Forest-fire models
17 Where do we stand?
References
Appendix A: Glossary of terms
Appendix B: Units and symbols
Answers to selected problems
Index
PREFACE
SCALE INVARIANCE
the number of objects larger than a specified size has a power-law depen-
dence on the size. The empirical applicability of power-law statistics to geo-
logical phenomena was recognized long before the concept of fractals was
conceived. A striking example is the Gutenberg-Richter relation for the fre-
quency-magnitude statistics of earthquakes (Gutenberg and Richter, 1954).
The proportionality factor in the relationship between the number of earth-
quakes and earthquake magnitude is known as the b-value. It has been rec-
ognized for nearly 50 years that, almost universally, b = 0.9. It is now ac-
cepted that the Gutenberg-Richter relationship is equivalent to a fractal
relationship between the number of earthquakes and the characteristic size
of the rupture; the value of the fractal dimension D is simply twice the
b-value; typically D = 1.8 for distributed seismicity.
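The relation between the b-value and the fractal dimension can be checked with a short numerical sketch. The catalog below is synthetic, assumed only for illustration: cumulative counts are generated exactly on a Gutenberg-Richter line with b = 0.9, and the b-value is recovered from the slope.

```python
# Hypothetical synthetic Gutenberg-Richter statistics: cumulative counts
# following log10 N(>m) = a - b*m with the near-universal b = 0.9.
a, b_true = 5.0, 0.9
mags = [4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0]
logN = [a - b_true * m for m in mags]

# recover the b-value from the slope of the frequency-magnitude line
b_est = (logN[0] - logN[-1]) / (mags[-1] - mags[0])

# the fractal dimension of distributed seismicity is twice the b-value
D = 2.0 * b_est
print(round(b_est, 6), round(D, 6))  # 0.9 1.8
```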
Power-law distributions are certainly not the only statistical distribu-
tions that have been applied to geological phenomena. Other examples in-
clude the normal (Gaussian) distribution and the log-normal distribution.
However, the power-law distribution is the only distribution that does not in-
clude a characteristic length scale. Thus the power-law distribution must be
applicable to scale-invariant phenomena. If a specified number of events are
statistically independent, the central-limit theorem provides a basis for the
applicability of the Gaussian distribution. Scale invariance provides a ratio-
nal basis for the applicability of the power-law, fractal distribution. Fractal
concepts can also be applied to continuous distributions; an example is
topography. Mandelbrot (1982) has used fractal concepts to generate syn-
thetic landscapes that look remarkably similar to actual landscapes. The
fractal dimension is a measure of the roughness of the features. The earth's
topography is a composite of many competing influences. Topography is
created by tectonic processes including faulting, folding, and flexure. It is
modified and destroyed by erosion and sedimentation. There is considerable
empirical evidence that erosion is scale invariant and fractal; a river network
is a classic example of a fractal tree. Topography often appears to be com-
plex and chaotic, yet there is order in the complexity. A standard approach to
the analysis of a continuous function such as topography along a linear track
is to determine the coefficients An in a Fourier series as a function of the
wavelength λn. If the amplitudes An have a power-law dependence on wave-
length λn, a fractal distribution may result. For topography and bathymetry it
is found that, to a good approximation, the Fourier amplitudes are propor-
tional to the wavelengths. This is also true for a Brownian walk, which can
be generated by the random walk process as follows. Take a step forward
and flip a coin; if tails occurs take a step to the right and if heads occurs take
a step to the left; repeat the process. The divergence of the walk or signal in-
creases in proportion to the square root of the number of steps. A spectral
analysis of the random walk shows that the Fourier coefficients An are pro-
portional to the wavelength λn.
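The square-root divergence of the random walk can be verified directly. The Monte Carlo sketch below uses assumed parameters (2000 walks) and checks that quadrupling the number of steps roughly doubles the root-mean-square displacement.

```python
import random

# Monte Carlo check that the spread of a one-dimensional random walk
# grows as the square root of the number of steps.
random.seed(0)

def rms_displacement(n_steps, n_walks=2000):
    total = 0.0
    for _ in range(n_walks):
        x = 0
        for _ in range(n_steps):
            x += 1 if random.random() < 0.5 else -1
        total += x * x
    return (total / n_walks) ** 0.5

r100 = rms_displacement(100)
r400 = rms_displacement(400)
# quadrupling the steps should roughly double the rms displacement
print(r400 / r100)  # close to 2
```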
Many geophysical data sets have power-law spectra. These include sur-
face gravity and magnetics as well as topography. Since power-law spectra
are defined by two quantities, the amplitude and the slope, these quantities
can be used to carry out textural analyses of data sets. The fractal structure
can also be used as the basis for interpolation between tracks where data
have been obtained. A specific example is the determination of the three-
dimensional distribution of porosity in an oil reservoir from a series of well
logs from oil wells.
The philosophy of fractals has been beautifully set forth by their inven-
tor Benoit Mandelbrot (Mandelbrot, 1982). A comprehensive treatment of
fractals from the point of view of applications has been given by Feder
(1988). Vicsek (1992) has also given an extensive treatment of fractals em-
phasizing growth phenomena. Kaye (1989, 1993) covers a broad range of
fractal problems emphasizing those involving particulate matter. Korvin
(1992) has considered many fractal applications in the earth sciences.
Although fractal distributions would be useful simply as a means of
quantifying scale-invariant distributions, it is now becoming evident that
their applicability to geological problems has a more fundamental basis.
Lorenz (1963) derived a set of nonlinear differential equations that approxi-
mate thermal convection in a fluid. This set of equations was the first to be
shown to exhibit chaotic behavior. Infinitesimal variations in initial condi-
tions led to first-order differences in the solutions obtained. This is the defi-
nition of chaos. The equations are completely deterministic; however, be-
cause of the exponential sensitivity to initial conditions, the evolution of a
chaotic solution is not predictable. The evolution of the solution must be
treated statistically and the applicable statistics are often fractal. A compre-
hensive study of problems in chaos has been given by Schuster (1995).
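The exponential sensitivity to initial conditions can be illustrated by integrating the Lorenz equations with their classic parameter values (σ = 10, r = 28, b = 8/3). The integrator below is a generic fourth-order Runge-Kutta sketch, not a scheme taken from the text; a perturbation of 10^-8 in the initial state grows to an order-one separation.

```python
# Sensitivity to initial conditions in the Lorenz equations
# (classic parameters sigma = 10, r = 28, b = 8/3).
def lorenz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(state, dt=0.01):
    def add(u, v, c):
        return tuple(p + c * q for p, q in zip(u, v))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * u + v)
                 for s, p, q, u, v in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b_ = (1.0 + 1e-8, 1.0, 1.0)   # infinitesimally perturbed copy
for _ in range(3000):          # integrate to t = 30
    a, b_ = rk4_step(a), rk4_step(b_)
sep = sum((p - q) ** 2 for p, q in zip(a, b_)) ** 0.5
print(sep)  # an order-one separation despite the 1e-8 initial difference
```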
The most universal example of chaotic behavior is fluid turbulence. It
has long been recognized that turbulent flows must be treated statistically
and that the appropriate spectral statistics are fractal. Since the flows in the
earth's core that generate the magnetic field are expected to be turbulent, it is
not surprising that they are also chaotic. The random reversals of the earth's
magnetic field are a characteristic of chaotic behavior. In fact, solutions of a
parameterized set of dynamo equations proposed by Rikitake (1958) exhib-
ited spontaneous reversals and were subsequently shown to be examples of
deterministic chaos (Cook and Roberts, 1970).
Recursion relations can also exhibit chaotic behavior. The classic exam-
ple is the logistic map studied by May (1976). This simple quadratic relation
has an amazing wealth of behavior. As the single parameter in the equation is
varied, the period of the recursive solution doubles until the solution be-
comes fully chaotic. The Lyapunov exponent is the quantitative test of
chaotic behavior; it is a measure of whether adjacent solutions converge or
diverge. If the Lyapunov exponent is positive, the adjacent solutions diverge
and chaotic behavior results. The logistic map and similar recursion rela-
tions are applicable to population dynamics and other ecological problems.
The logistic map also produces fractal sets.
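The period doubling and the Lyapunov test can be sketched in a few lines. For the logistic map x → a x(1 − x) the Lyapunov exponent is the orbit average of ln|a(1 − 2x)|; the parameter values below (a = 3.2 periodic, a = 4.0 fully chaotic) are standard illustrative choices.

```python
import math

# Logistic map x -> a*x*(1-x); the Lyapunov exponent is the average of
# ln|f'(x)| = ln|a*(1-2x)| along the orbit. Positive exponent => chaos.
def lyapunov(a, x=0.4, transient=1000, n=10000):
    for _ in range(transient):
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        # max(..., 1e-300) guards against a stray log(0)
        total += math.log(max(abs(a * (1.0 - 2.0 * x)), 1e-300))
        x = a * x * (1.0 - x)
    return total / n

print(lyapunov(3.2))  # negative: the period-2 regime
print(lyapunov(4.0))  # positive, close to ln 2: fully chaotic
```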
Slider-block models have long been recognized as a simple analog for
the behavior of a fault. The block is dragged along a surface with a spring
and the friction between the surface and the block can result in the stick-slip
behavior that is characteristic of faults. Huang and Turcotte (1990a) have
shown that a pair of slider blocks exhibits chaotic behavior in a manner that
is totally analogous to the chaotic behavior of the logistic map. The two
slider blocks are attached to each other by a spring and each is attached to a
constant-velocity driver plate by another spring. As long as there is any
asymmetry in the problem, for example, nonequal block masses, chaotic be-
havior can result. This is evidence that the deformation of the crust associ-
ated with displacements on faults is chaotic and, thus, is a statistical process.
This is entirely consistent with the observation that earthquakes obey fractal
statistics.
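The stick-slip mechanism itself can be caricatured with a single block, although the chaotic behavior described above requires the two-block system. The sketch below is a deliberately simplified quasi-static model with assumed parameter values: the block jumps forward whenever the spring force exceeds static friction Fs, slipping until the force drops to the dynamic friction Fd.

```python
# Quasi-static stick-slip sketch for a single slider block pulled through
# a spring by a constant-velocity driver (illustrative parameters).
k = 1.0             # spring stiffness
v_driver = 1.0      # driver velocity
Fs, Fd = 1.0, 0.5   # static and dynamic friction forces

x_block, x_driver, dt = 0.0, 0.0, 0.001
slips = []
for step in range(20000):
    x_driver += v_driver * dt
    force = k * (x_driver - x_block)
    if force > Fs:
        # slip event: block jumps until the spring force drops to Fd
        new_x = x_driver - Fd / k
        slips.append(new_x - x_block)
        x_block = new_x

print(len(slips), round(slips[0], 3))  # repeated slips of size (Fs - Fd)/k
```

Each slip has nearly the same size, (Fs − Fd)/k = 0.5 here; irregular, fractal-looking sequences of events require the asymmetric multi-block systems discussed above.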
Nonlinearity is a necessary condition for chaotic behavior. It is also a
necessary condition for scale invariance and fractal statistics. Historically
continuum mechanics has been dominated by the applications of three linear
partial differential equations. They have also provided the foundations of
geophysics. Outside the regions in which they are created, gravitational
fields, electric fields, and magnetic fields all satisfy the Laplace equation.
The wave equation provides the basis for understanding the propagation of
seismic waves. And the heat equation provides the basis for understanding
how heat is transferred within the earth. All of these equations are linear and
none generates solutions that are chaotic. Also, the solutions are not scale in-
variant unless scale-invariant boundary conditions are applied.
Two stochastic models that exhibit fractal statistics in a variety of ways
are percolation clusters (Stauffer and Aharony, 1992) and diffusion-limited
aggregation (DLA) (Vicsek, 1992). In defining a percolation cluster a two-
dimensional grid of square boxes can be considered. The probability that a
site is permeable p is specified, and there is a sudden onset of flow through
the grid at a critical value of this probability, pc = 0.59275. This is a critical
point and there are a variety of fractal scaling laws valid at and near the crit-
ical point. There is observational evidence that distributed seismicity has a
strong similarity to percolation clusters.
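The sudden onset of flow can be demonstrated with a small site-percolation sketch. The grid size, trial count, and the two probe probabilities below are illustrative choices; "flow" is taken to mean a permeable path connecting the top row to the bottom row.

```python
import random

# Site percolation on an n x n grid: each site is permeable with
# probability p; flow occurs if a permeable path spans top to bottom.
def percolates(n, p, rng):
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    stack = [(0, j) for j in range(n) if grid[0][j]]
    seen = set(stack)
    while stack:  # flood fill from the permeable sites in the top row
        i, j = stack.pop()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                stack.append((a, b))
    return False

rng = random.Random(1)
n, trials = 40, 200
frac_low = sum(percolates(n, 0.45, rng) for _ in range(trials)) / trials
frac_high = sum(percolates(n, 0.75, rng) for _ in range(trials)) / trials
print(frac_low, frac_high)  # near 0 below pc = 0.59275, near 1 above
```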
In generating a diffusion-limited aggregation a two-dimensional grid of
square boxes can again be considered. A seed cell is placed in one of the
boxes. Additional cells are added randomly and follow a random-walk path
from box to box until they accrete to the growing cluster of cells by entering
a box adjacent to the growing cluster. A sparse dendritic structure results be-
cause the random walkers are more likely to accrete near the tips of a cluster
rather than in the deep interior. The resulting cluster satisfies fractal statistics
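The growth rule just described can be sketched on a lattice. The launch and relaunch rules below are illustrative implementation choices, not a standard reference algorithm: walkers are released on a circle just outside the cluster and stick when they step next to an occupied site.

```python
import math
import random

# Minimal on-lattice DLA sketch: random walkers accrete to a seed when
# they arrive adjacent to the growing cluster.
def grow_dla(n_particles, seed=2):
    rng = random.Random(seed)
    cluster = {(0, 0)}
    radius = 2
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        angle = rng.uniform(0.0, 2.0 * math.pi)
        start = (int(round(radius * math.cos(angle))),
                 int(round(radius * math.sin(angle))))
        x, y = start
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in steps):
                cluster.add((x, y))        # accrete next to the cluster
                radius = max(radius, abs(x) + abs(y) + 2)
                break
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
            if abs(x) > 4 * radius or abs(y) > 4 * radius:
                x, y = start               # wandered too far; relaunch
    return cluster

cluster = grow_dla(200)
print(len(cluster))  # 201 sites: the seed plus one site per walker
```

The resulting cluster is much sparser than a compact disk of the same mass, reflecting the dendritic, tip-favoring growth described above.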
DEFINITION OF A
FRACTAL SET
where ln is a logarithm to the base e and log is a logarithm to the base 10. In
almost all applications we will require the ratio of logarithms; in this case
the result is the same if the logarithm to the base e (ln) is used or if the loga-
rithm to the base 10 (log) is used. For the example considered in Figure
2.1(a), ln(N2/N1) = ln 1 = 0, ln(r1/r2) = ln 2, and D = 0, the Euclidean dimen-
sion of a point. This construction can be extended to higher and higher or-
ders, but at each order i, i = 1, 2, ..., n, we have ln(Ni+1/Ni) = ln 1 = 0. As
the order approaches infinity, n → ∞, the remaining line length approaches
zero, rn → 0, becoming a point. Thus the Euclidean dimension of a point,
zero, is appropriate. The construction illustrated in Figure 2.1(b) is similar
except that the line segment of unit length at zero order is divided into three

[Figure 2.1. Illustration of six one-dimensional fractal constructions. At zero
order a line segment of unit length is considered. At first order the line
segment is divided into an integer number of equal-sized smaller segments and
a fraction of these segments is retained. The first-order fractal acts as a
generator for higher-order fractals.]
The fractal concepts applied above to a line segment can also be applied
to a square. A series of examples is given in Figure 2.3. In each case the zero-
order square is divided into nine squares at first order, each with r1 = 1/3. At
second order the remaining squares are divided into nine squares each with
r2 = 1/9, and so forth. In Figure 2.3(a) only one square is retained, so that N1 =
N2 = ... = Nn = 1. From (2.2) D = 0, which is the Euclidean dimension of a
point; this is appropriate since as n → ∞ the remaining square will become a
point. In Figure 2.3(b) two squares are retained at first order so that r1 = 1/3,
N1 = 2, and at second order r2 = 1/9, N2 = 4. Thus from (2.2), D = ln 2/ln 3 =
0.6309, the same result that was obtained from Figure 2.1(e), as expected.
Similarly, in Figure 2.3(c) three squares are retained at first order so that
r1 = 1/3, N1 = 3, and at second order r2 = 1/9, N2 = 9; thus D = ln 3/ln 3 = 1. In the
limit n → ∞ the remaining squares will become a line as in Figure 2.1(d).
The Euclidean dimension of a line is found. In Figure 2.3(d), only the center
square is removed; thus at first order r1 = 1/3, N1 = 8, and at second order r2 =
1/9, N2 = 64. From (2.2) we have D = ln 8/ln 3 = 1.8928. This construction is
known as a Sierpinski carpet. In Figure 2.3(e) all nine squares are retained;
thus at first order r1 = 1/3, N1 = 9, and at second order r2 = 1/9, N2 = 81. From
(2.2) we have D = ln 9/ln 3 = 2. This is the Euclidean dimension of a square
and is appropriate because when we retain all the blocks we continue to re-
tain the unit square at all orders. Iterative constructions can be devised to
yield any fractal dimension between 0 and 2; again each construction is
scale invariant.

[Figure 2.3. Illustration of five two-dimensional fractal constructions. At
zero order a square of unit area is considered. At first order the unit square
is divided into nine equal-sized smaller squares with r1 = 1/3 and a fraction
of these squares is retained. The first-order fractal acts as a generator for
higher-order fractals. Each of the retained squares at first order is divided
into smaller squares using the generator to create a second-order fractal. The
first two orders with r1 = 1/3 and r2 = 1/9 are illustrated but the construction
can be carried to any order desired. (a) N1 = 1, N2 = 1, D = ln 1/ln 3 = 0.
(b) N1 = 2, N2 = 4, D = ln 2/ln 3 = 0.6309. (c) N1 = 3, N2 = 9, D = ln 3/ln 3 = 1.
(d) N1 = 8, N2 = 64, D = ln 8/ln 3 = 1.8928 (known as a Sierpinski carpet).
(e) N1 = 9, N2 = 81, D = ln 9/ln 3 = 2.]
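For constructions of this kind, where N elements of relative size r are retained at each order, (2.2) reduces to the similarity dimension D = ln N / ln(1/r). The short sketch below (the function name is an illustrative choice) reproduces the five values quoted for Figure 2.3.

```python
import math

# Similarity dimension: N elements of relative size r retained per order.
def similarity_dimension(N, r):
    return math.log(N) / math.log(1.0 / r)

# the five unit-square constructions of Figure 2.3, each with r = 1/3
for N in (1, 2, 3, 8, 9):
    print(N, round(similarity_dimension(N, 1.0 / 3.0), 4))
# D = 0.0, 0.6309, 1.0, 1.8928 (Sierpinski carpet), 2.0
```

The same formula covers the three-dimensional constructions that follow, for example N = 20, r = 1/3 for the Menger sponge.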
The examples for one and two dimensions given in Figures 2.1 and 2.3
can be extended to three dimensions. Two examples are given in Figure 2.4.
The Menger sponge is illustrated in Figure 2.4(a). A zero-order solid cube of
unit dimensions has square passages with dimensions r1 = 1/3 cut through the
centers of the six sides. At first order six cubes in the center of each side are
removed as well as the central cube. Twenty cubes with dimensions r1 = 1/3 re-
main so that N1 = 20. At second order the remaining 20 cubes have square
passages with dimensions r2 = 1/9 cut through the centers of their six sides. In
each case the six cubes in the centers of each side are removed as well as the
center cube. Four hundred cubes with r2 = 1/9 remain so that N2 = 400. From
(2.2) we find that D = ln 20/ln 3 = 2.7268. The Menger sponge can be used as
a model for flow in a porous medium with a fractal distribution of porosity.
Another example of a three-dimensional fractal construction is given in Fig-
ure 2.4(b). Again the unit cube is considered at zero order. At first order it is
divided into eight equal-sized cubes with r1 = 1/2, and two diagonally opposite
corner cubes are removed so that N1 = 6. At second order each of the remain-
ing six cubes is divided into eight equal-sized smaller cubes with r2 = 1/4. In
each case two diagonally opposite corner cubes are removed so that N2 = 36.
From (2.2) we find that D = ln 6/ln 2 = 2.585. We will use this configuration
for a variety of applications in later chapters. Iterative constructions can be
devised to yield any fractal dimension between 0 and 3; again each construc-
tion is scale invariant.

[Figure 2.4. Illustration of two three-dimensional fractal constructions.
(a) At first order the unit cube is divided into 27 equal-sized smaller cubes
with r1 = 1/3; 20 cubes are retained so that N1 = 20. At second order r2 = 1/9
and 400 out of 729 cubes are retained so that N2 = 400; D = ln 20/ln 3 = 2.727.
This construction is known as the Menger sponge. (b) At first order the unit
cube is divided into eight equal-sized smaller cubes with r1 = 1/2. Two
diagonally opposite cubes are removed so that six cubes are retained and
N1 = 6. At second order r2 = 1/4 and 36 out of 64 cubes are retained so that
N2 = 36; D = ln 6/ln 2 = 2.585.]

The examples given above illustrate how geometrical constructions can
give noninteger, non-Euclidean dimensions. However, in each case the
where ri is the side length at order i and Ni is the number of sides. Substitu-
tion of (2.1) gives
The triadic Koch island can be considered to be a model for measuring the
length of a rocky coastline. However, there are several fundamental differ-
ences. The primary difference is that the perimeter of the Koch island is de-
terministic and the perimeter of a coastline is statistical. The perimeter of the
Koch island is identically scale invariant at all scales. The perimeter of a
rocky coastline will be statistically different at different scales but the differ-
ences do not allow the scale to be determined. Thus a rocky coastline is a sta-
tistical fractal. A second difference between the triadic Koch island and a
rocky coastline is the range of scales over which scale invariance (fractal
behavior) extends. Although a Koch island has the maximum scale of the
original triangle, the construction can be extended over an infinite range of
scales. A rocky coastline has both a maximum scale and a minimum scale.
The maximum scale would typically be 10^3 to 10^4 km, the size of the conti-
nent or island considered. The minimum scale would be the scale of the
grain size of the rocks, typically 1 mm. Thus the scale invariance of a rocky
coastline could extend over nine orders of magnitude. The existence of both
upper and lower bounds is a characteristic of all naturally occurring fractal
systems. In addition, a coastline will be only approximately scale invariant
(fractal), and there will be statistical fluctuations in
any measure of fractality. On the other hand, the triadic Koch island is ex-
actly scale invariant (fractal).
Mandelbrot (1967) introduced the concept of fractals by using (2.4) to
determine the fractal dimension of the west coast of Great Britain. The
length of the coastline Pi was determined for a range of measuring rod
lengths ri. Mandelbrot (1967) used measurements of the length of the coast-
line obtained previously by Richardson (1961). Taking a map of a coastline,
the length is obtained by using dividers of different lengths ri. Using the
scale of the map, the length of the coastline is plotted against the divider
length on log-log paper. If the data points define a straight line, the result is
a statistical fractal. The result for the west coast of Great Britain is given in
Figure 2.6. As shown, the data correlate well with (2.4), taking D = 1.25.
This is evidence that the coastline is a fractal and is statistically scale invari-
ant over this range of scales.
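The ruler (divider) method itself is easy to sketch numerically. Below it is applied to a synthetic triadic Koch curve, whose exact fractal dimension is ln 4/ln 3 ≈ 1.2619; the divider-walk implementation is an illustrative simplification of Richardson's procedure, not taken from the text.

```python
import math

# Generate a triadic Koch curve, then walk dividers of opening r along it.
def koch(p, q, depth):
    if depth == 0:
        return [p]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)
    b = (x0 + 2 * dx, y0 + 2 * dy)
    # apex of the equilateral triangle erected on the middle third
    apex = (x0 + 1.5 * dx - math.sqrt(3) / 2 * dy,
            y0 + 1.5 * dy + math.sqrt(3) / 2 * dx)
    pts = []
    for s, t in ((p, a), (a, apex), (apex, b), (b, q)):
        pts += koch(s, t, depth - 1)
    return pts

def divider_count(points, r):
    count, i = 0, 0
    while i < len(points) - 1:
        j = i + 1
        while j < len(points) and math.dist(points[i], points[j]) < r:
            j += 1
        if j == len(points):
            break
        count, i = count + 1, j
    return count

pts = koch((0.0, 0.0), (1.0, 0.0), 7) + [(1.0, 0.0)]
rs = [0.02, 0.04, 0.08]
Ns = [divider_count(pts, r) for r in rs]
# the slope of log N against log r gives -D
D = -(math.log(Ns[0]) - math.log(Ns[-1])) / (math.log(rs[0]) - math.log(rs[-1]))
print(round(D, 2))  # close to ln 4 / ln 3 = 1.26
```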
The technique for obtaining the fractal dimension of a coastline is easily
extended to any topography. Contour lines on a topographic map are entirely
equivalent to coastlines; the lengths along specified contours Pi are obtained
using dividers of different lengths ri. The fractal relation (2.4) is generally a
good approximation and fractal dimensions can be obtained. As illustrated in
Figure 2.7, the fractal dimensions of topography using the ruler (divider)
method are generally in the range D = 1.20 ± 0.05, independent of the tec-
Although the ruler (divider) method was the first used to obtain fractal
dimensions, it is not the most generally applicable method. The box-count-
ing method has a much wider range of applicability than the ruler method
(Pfeiffer and Obert, 1989). For example, it can be applied to a distribution of
points as easily as it can be applied to a continuous curve. We now use the
all power-law distributions that satisfy (2.1) or (2.6) are fractal. In this book
we define them to be fractal. Such distributions are clearly scale invariant,
even if not directly associated with a fractal dimension. This choice elimi-
nates an ambiguity that can lead to considerable confusion when addressing
measured data sets. We will continually address this question as we consider
specific applications.
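The box-counting method mentioned above can be sketched as follows. The point set is generated by the chaos-game construction of a Sierpinski triangle, an illustrative choice with known dimension D = ln 3/ln 2 ≈ 1.585; the occupied boxes are counted at two box sizes and the dimension is taken from the slope.

```python
import math
import random

# Box counting: cover a point set with square boxes of size s and count
# occupied boxes; D is the slope of log N versus log(1/s).
def box_count(points, s):
    return len({(math.floor(px / s), math.floor(py / s)) for px, py in points})

# points on a Sierpinski triangle via the chaos game
rng = random.Random(0)
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
x, y = 0.1, 0.1
pts = []
for i in range(20050):
    vx, vy = rng.choice(verts)
    x, y = (x + vx) / 2, (y + vy) / 2
    if i >= 50:  # discard a short transient off the attractor
        pts.append((x, y))

s1, s2 = 1.0 / 16.0, 1.0 / 64.0
D = (math.log(box_count(pts, s2)) - math.log(box_count(pts, s1))) / math.log(s1 / s2)
print(round(D, 2))  # close to ln 3 / ln 2 = 1.585
```

Unlike the divider method, this procedure applies equally well to scattered points, which is why it is the more generally useful of the two.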
sediment supply rate is sufficient to keep the surface of the sediments at sea
level. With this assumption and a constant rate of subsidence R, the rate of
deposition of sediments is also R and the thickness of sediments is ys = Rt.
With this simple model the rate of deposition is constant, and there are no
gaps in the sedimentary record. However, it is well known that sedimentary
sequences are characterized by unconformities (bedding planes), which rep-
resent gaps in the sedimentary record. An unconformity represents a period
of time during which erosion was occurring and/or a period of time during
which no sediment was deposited.
One mechanism for generating sedimentary unconformities is to hypoth-
esize variations in sea level. We will first illustrate how harmonic variations
in sea level with time can generate gaps (unconformities) in the sedimentary
record. Our simple model is illustrated in Figure 2.12. The dashed straight
line in Figure 2.12(a) gives the thickness of sediments ys = Rt with R =
1 mm/yr and no variations in sea level. After two million years, t = 2 Myr,
the thickness of sediments is ys = 2 km. Now assume that the variation in sea
level is given by
and we take ysl0 = 400 m and τ = 2 Myr. During the first 500,000 yr sea
level is rising, during the next 1,000,000 yr sea level is falling, and during
the final 500,000 yr sea level is again rising. If no sedimentation was occur-
ring, the depth of water during a cycle τ would be given by
and this is the solid line in Figure 2.12(a). We again assume that the rate of
sedimentation is sufficiently high that the actual water depth is zero. At
t = 0 the rate of subsidence is R = 1 mm/yr; the rate of sea level rise is 1.26
mm/yr so that the rate of sediment deposition is 2.26 mm/yr. The thickness
of sediments deposited follows the solid curve in Figure 2.12(a). However,
at t = 792,000 yr (point a) the rate of sea level fall becomes equal to the
rate of subsidence. For the period 792,000 < t < 1,208,000 yr (point b) sea
level is falling faster than the subsidence rate. Without erosion the previ-
ously deposited sediments would rise above sea level. We assume, how-
ever, that erosion is sufficiently rapid that the rising landscape is main-
tained at sea level. At t = 1,208,000 yr, 70 m of previously accumulated
sediments have been eroded. The result is an unconformity and a gap in the
sedimentary record. The sediments immediately below the unconformity
were deposited at t = 577,000 yr (point c) and the sediments immediately
above the unconformity were deposited at t = 1,208,000 yr (point b), a gap
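The timing of this unconformity can be checked numerically. The sketch below assumes a sinusoidal sea-level variation ysl = 400 m × sin(2πt/τ) with τ = 2 Myr, a form consistent with the 1.26 mm/yr initial rate of sea-level rise quoted above.

```python
import math

# Harmonic sea-level model: subsidence R = 1 mm/yr, sinusoidal sea level.
# While deposition keeps pace, sediment thickness is ys = R*t + ysl(t).
R = 1.0e-3                 # subsidence rate, m/yr
ysl0, tau = 400.0, 2.0e6   # sea-level amplitude (m) and period (yr)

def thickness_rate(t):
    # rate of change of sediment thickness while the surface tracks sea level
    return R + ysl0 * (2 * math.pi / tau) * math.cos(2 * math.pi * t / tau)

# find point a, where the rate of sea-level fall first balances subsidence
t = 0.0
while thickness_rate(t) > 0:
    t += 1000.0
print(round(t))  # close to the 792,000 yr quoted above

def ys(time):
    return R * time + ysl0 * math.sin(2 * math.pi * time / tau)

# thickness eroded between points a (792,000 yr) and b (1,208,000 yr)
eroded = ys(792000.0) - ys(1208000.0)
print(round(eroded))  # close to the 70 m quoted above
```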
[Figure 2.13. Illustration of a model for sediment deposition based on a
devil's staircase associated with a second-order Cantor set. (a) Age of
sediments T as a function of depth y. (b) Illustration of how the Cantor set
is used to construct the sedimentary pile. (c) Average rate of deposition as
a function of the period T considered.]
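The rate scaling of this Cantor-set (devil's staircase) deposition model can be sketched numerically; normalized totals L0 = T = 1 are assumed purely for illustration.

```python
# Deposition-rate scaling for the Cantor-set (devil's staircase) model:
# at order i, 2**i retained intervals of duration T/3**i each receive a
# thickness L0/2**i of sediments. Normalized totals L0 = T = 1 assumed.
L0, T = 1.0, 1.0

rates = []
for i in range(4):
    # rate = (L0 / 2**i) / (T / 3**i), written to stay exact in floats
    rates.append(L0 * 3 ** i / (T * 2 ** i))

print(rates)  # [1.0, 1.5, 2.25, 3.375]
```

Each refinement of the measuring interval multiplies the apparent deposition rate by 3/2, which is the signature of the fractal gaps in the record.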
where Li is the thickness of sediments deposited in the period τi. The period
τi in our model is equivalent to the line segment length ri in the fractal sets il-
lustrated in Figure 2.1. For the example given in Figure 2.13 we have τ0 = T,
τ1 = T/3, and τ2 = T/9. The thickness of sediments Li is given by the number of
segments retained at a specified order Ni so that
For the example given in Figure 2.13 we have N1 = 2 and L1 = L0/2 and N2 =
4 and L2 = L0/4. Noting the equivalence of our τi and ri in the fractal relation
(2.1), we can write the fractal relation
And from (2.14) we have D = 0.746. For erosion these authors found that the
rate of erosion Re is related to the interval τ by
tics of sediment deposition. The erosional processes responsible for the for-
mation of coastlines and the depositional processes responsible for the struc-
ture of a sedimentary pile are both extremely complex. But despite the com-
plexity, both examples exhibit fractal behavior to a good approximation. A
simple explanation is that a distribution will be fractal if there is no charac-
teristic length in the problem. The fractal distribution is the only statistical
distribution that is scale invariant. However, a broad class of nonlinear phys-
ical problems involving chaotic behavior and/or self-organized critical be-
havior invariably yield fractal behavior. One objective in succeeding chap-
ters is to describe physically realistic models that generate fractal behavior.
Problems
Problem 2.1. Consider the construction illustrated in Figure 2.1(e). (a) Illus-
trate the construction at third order. (b) Determine N3, N4, r3, and r4.
Problem 2.2. Consider the construction illustrated in Figure 2.1(f). (a) Illus-
trate the construction at third order. (b) Determine N3, N4, r3, and r4.
Problem 2.3. A unit line segment is divided into five equal parts and two are
retained. The construction is repeated. (a) Illustrate the construction to
third order, i.e., consider i = 1, 2, 3. (b) Determine N1, N2, N3, r1, r2, r3.
(c) Determine the fractal dimension.
Problem 2.4. A unit line segment is divided into seven equal parts and three
are retained. The construction is repeated. (a) Illustrate this construction
to second order, i.e., consider i = 1, 2. (b) Determine N1, N2, N3, r1, r2, r3.
(c) Determine the fractal dimension.
Problem 2.5. A unit line segment is divided into seven equal parts and four
are retained. The construction is repeated. (a) Illustrate this construction
to second order, i.e., consider i = 1, 2. (b) Determine N1, N2, N3, r1, r2, r3.
(c) Determine the fractal dimension.
Problem 2.6. Consider the construction of the Sierpinski carpet illustrated in
Figure 2.3(d) at third order. (a) Illustrate the construction at third order.
(b) Determine N3, N4, r3, and r4.
Problem 2.7. A unit square is divided into four smaller squares of equal size.
Two diagonally opposite squares are retained and the construction is re-
peated. (a) Illustrate the construction to third order, i.e., consider i = 1, 2,
3. (b) Determine N1, N2, N3, r1, r2, r3. (c) Determine the fractal dimen-
sion.
Problem 2.8. A unit square is divided into nine smaller squares of equal size.
The center square and four corner squares are retained and the construc-
tion is repeated. This is known as a Koch snowflake. (a) Illustrate the
construction to second order, i.e., consider i = 1, 2. (b) Determine N1, N2,
N3, r1, r2, r3. (c) Determine the fractal dimension.
Problem 2.9. A unit square is divided into nine smaller squares of equal size
and the four corner squares are discarded. The construction is repeated.
(a) Illustrate the construction to second order. (b) Determine N1, N2, N3,
r1, r2, r3. (c) Determine the fractal dimension.
Problem 2.10. A unit square is divided into 16 smaller squares of equal size.
The four central squares are removed and the construction is repeated.
(a) Illustrate this construction to second order, i.e., consider i = 1, 2. (b)
Determine N1, N2, N3, r1, r2, r3. (c) Determine the fractal dimension.
Problem 2.11. A unit square is divided into 25 smaller squares of equal size.
All squares are retained except the central one and the construction is re-
peated. (a) Illustrate this construction to second order, i.e., consider i =
1, 2. (b) Determine N1, N2, N3, r1, r2, r3. (c) Determine the fractal dimen-
sion.
Problem 2.12. A unit square is divided into 25 smaller squares of equal size.
All the squares on the boundary and the central square are retained and
the construction is repeated. (a) Illustrate this construction to second or-
der, i.e., consider i = 1, 2. (b) Determine N1, N2, N3, r1, r2, r3. (c) Deter-
mine the fractal dimension.
Problem 2.13. A unit cube is divided into 27 smaller cubes of equal volume.
All the cubes are retained except for the central one. What is the fractal
dimension?
Problem 2.14. Consider a variation on the Koch island illustrated in Figure
2.5. At zero order again consider an equilateral triangle with three sides
of unit length. At first order this triangle is enlarged so that it is an equi-
lateral triangle with sides of length three. Equilateral triangles with sides
of unit length are placed in the center of each side. (a) Illustrate this con-
struction at second order. (b) Determine the areas to second order, i.e.,
obtain A0, A1, A2. (c) Do the areas given in (b) satisfy the fractal condi-
tion (2.1)? If the answer is yes, what is the fractal dimension?
Problem 2.15. Consider the fractal construction illustrated in Figure 2.15. A
unit square is considered at zero order and the first-order fractal con-
struction is also illustrated. (a) Illustrate the construction at second or-
der. (b) Determine N0, N1, N2, r0, r1, r2, P0, P1, P2. (c) Determine the frac-
tal dimension.
Problem 2.16. Assume that the open squares in the Sierpinski carpet illus-
trated in Figure 2.3(d) represent lakes. (a) Determine the numbers of
lakes to the third order, i.e., obtain N1, N2, N3 corresponding to r1, r2, r3.
(b) Do the numbers of lakes given in (a) satisfy the fractal condition
(2.1)? If the answer is yes, what is the fractal dimension?
Problem 2.17. Zipf's law (Zipf, 1949) has been applied in a wide variety of
problems including the size distribution of cities. This law states that the
2nd largest is 1/2 the size of the largest, the 3rd largest is 1/3 the size of the
largest, the 4th largest is 1/4 the size of the largest, and so forth. Does this
distribution satisfy a cumulative fractal distribution and, if so, what is
the fractal dimension?
Problem 2.18. Construct a second-order devil's staircase based on the fractal
construction given in Figure 2.1(f).
Problem 2.19. Consider the simple deposition model illustrated in Figure
2.11. Assume that no erosion occurs. What are the ages of the sediments
immediately above and below the resulting unconformity?
Problem 2.20. Use the second-order Cantor set based on the fractal construction given in Figure 2.1(f) as a model for sedimentation. Assume in this
model that 9 km of sediments have been deposited in 25 Myr. (a) At
what depths do the two first-order unconformities occur, and what are
the ages of the sediments just above and just below the unconformities?
(b) At what depths do the six second-order unconformities occur, and
what are the ages of the sediments just above and just below the uncon-
formities? (c) Plot the age of the sediments as a function of depth. (d)
What are the rates of deposition associated with the periods 25 Myr,
5 Myr, 1 Myr?
Chapter Three
FRAGMENTATION
3.1 Background
To illustrate how fractal distributions are applicable to real data sets, we con-
sider fragmentation. Fragmentation plays an important role in a variety of
geological phenomena. The earth's crust is fragmented by tectonic processes
involving faults, fractures, and joint sets. Rocks are further fragmented by
weathering processes. Rocks are also fragmented by explosive processes,
both natural and man made. Volcanic eruptions are an example of a natural
explosive process. Impacts produce fragmented ejecta. Although fragmenta-
tion is of considerable economic importance and many experimental, numer-
ical, and theoretical studies have been carried out on fragmentation, rela-
tively little progress has been made in developing comprehensive theories of
fragmentation. A primary reason is that fragmentation involves the initiation
and propagation of fractures. Fracture propagation is a highly nonlinear pro-
cess requiring complex models for even the simplest configuration. Frag-
mentation involves the interaction between fractures over a wide range of
scales. Fragmentation phenomena have been discussed by Grady and Kipp
(1987) and Clark (1987). If fragments are produced with a wide range of
sizes and if natural scales are not associated with either the fragmented ma-
terial or the fragmentation process, fractal distributions of number versus
size would seem to be expected. Some fractal aspects of fragmentation have
been considered by Turcotte (1986a).
3.2 Probability and statistics
In this section we introduce some basic concepts of probability and statistics; it is necessary to distinguish between discrete data and continuous data. Discrete data are generally characterized
by a set of n data points {x1, x2, . . . , xi, . . . , xn}. Examples include the masses of n fragments and the magnitudes of n earthquakes. It is standard practice to describe the statistical properties of a discrete data set by defining the mean and moments of deviations from the mean. The mean value x̄ of the xi is given by

x̄ = (1/n)(x1 + x2 + . . . + xn)
The average squared deviation from the mean is a measure of the spread of the data; this is the variance V, and for a discrete set of n data points it is given by

V = (1/n)[(x1 − x̄)² + (x2 − x̄)² + . . . + (xn − x̄)²]
In many cases, each of the values x1, x2, . . . , xi, . . . , xn will have a probability of occurring f1, f2, . . . , fi, . . . , fn. By definition we have

f1 + f2 + . . . + fn = 1
As an example consider flipping coins. Assign +1 to a head and −1 to a tail. For a single coin there are two values of x, x1 = +1 (a head) and x2 = −1 (a tail). Since the probabilities of having a head or a tail are equal, we have f1 = f2 = 0.5. Next consider flipping two coins. We now have three values for x, x1 = +2 (two heads), x2 = 0 (one head and one tail), and x3 = −2 (two tails). However, there are two ways to obtain x2 = 0, the first coin is a head and the second a tail or the first is a tail and the second a head, whereas there is only one way to obtain x1 = +2 (two heads) and x3 = −2 (two tails). Thus we have f1 = 0.25, f2 = 0.5, and f3 = 0.25. From (3.6) to (3.9) we find x̄ = γ = 0, V = 2, and σ = √2. Finally consider flipping three coins. In this case we have x1 = 3 (3 heads) with f1 = 0.125, x2 = 1 (2 heads, 1 tail) with f2 = 0.375, x3 = −1 (1 head, 2 tails) with f3 = 0.375, and x4 = −3 (3 tails) with f4 = 0.125. From (3.6) to (3.9) we find x̄ = γ = 0, V = 3, and σ = √3.
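The coin-flip statistics can be reproduced exactly from the binomial probabilities; a minimal sketch (scoring +1 per head and −1 per tail, as above):

```python
# Sketch verifying the coin-flip example: for n coins, scoring +1 per
# head and -1 per tail, the mean is 0 and the variance equals n.
from math import comb, sqrt

def coin_stats(n):
    # Possible scores x = (heads - tails) with their binomial probabilities f.
    pairs = [(2 * h - n, comb(n, h) / 2 ** n) for h in range(n + 1)]
    mean = sum(x * f for x, f in pairs)
    var = sum(f * (x - mean) ** 2 for x, f in pairs)
    return mean, var, sqrt(var)

print(coin_stats(2))   # mean 0, variance 2, standard deviation sqrt(2)
print(coin_stats(3))   # mean 0, variance 3, standard deviation sqrt(3)
```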
We will next consider continuous data. A particular variable can take any value over a specified range, say −∞ < x < ∞. An example would be the x-component of the velocity of the gas molecules in a room, vx. It is appropriate to consider the range of velocities −∞ < vx < ∞, and there will be a statistical probability that a particular molecule will have a velocity greater than a specified value vx. In terms of our general distribution the cumulative distribution function F(x1) is the probability Pr that x has a value greater than x1
It should be noted that the usual definition of a cumulative distribution function in probability and statistics is from −∞ to x rather than x to ∞, i.e.,
For most applications in geology and geophysics we are concerned with the
"number" larger than a specified value and thus the definition of F(x) given
in (3.10) is preferred. The cumulative distribution function is related to the probability distribution function f(x) by

where f(x) δx is the probability that x lies in the range (x − ½δx) < x ≤ (x + ½δx). We also have
where m is the molecular mass, k the Boltzmann constant, and T the temper-
ature.
Introducing

y = (x − x̄)/(√2 σ)

(3.17) becomes
but
so that x̄ is the mean of the normal distribution. The variance of the normal
distribution is obtained from
V = ∫−∞∞ (x − x̄)² f(x) dx = [1/σ(2π)^(1/2)] ∫−∞∞ (x − x̄)² exp[−(x − x̄)²/2σ²] dx   (3.20)
has been made noting that dy = dx/x and f(x) dx = f(y) dy. The values of y are normally distributed with a mean ȳ and a standard deviation σy. Using the definitions of the mean and standard deviation, (3.6) and (3.8), we can relate x̄ and σ to ȳ and σy with the result
Since both the standard deviation and the mean are positive for the log-normal distribution, the ratio of the two quantities is a measure of the spread of the distribution

cv = σ/x̄

and is known as the coefficient of variation. The coefficient of skew for the
log-normal distribution is
we obtain
The cumulative distribution functions for the log-normal and normal distri-
butions have the same forms when x is replaced by In x, i.e. (3.30).
The standard form of the normal distribution was obtained by taking x̄ = 0 and V = 1. All normal distributions have this universal form and can be obtained simply by rescaling. This is not the case for the log-normal distribution. The probability distribution functions f(x) for the log-normal distribution are given in Figure 3.2 for x̄ = 1 and cv = 0.25, 0.50, 1.00. It is seen that the shape of the log-normal distribution changes systematically with changes in cv. As the value of cv becomes smaller the distribution narrows and the maximum value approaches x = 1. In the limit cv → 0 the distribution is a δ function centered at x = 1. As cv becomes larger the distribution spreads out and the maximum value occurs at smaller x. Whereas the normal distribution is symmetric with a zero coefficient of skew, the asymmetry and coefficient of skew for the log-normal distribution increase with increasing cv.
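The dependence of the skew on cv can be checked numerically. The sketch below (the quadrature scheme and the choice σy = 0.5 are illustrative assumptions) integrates the moments of x = exp(y) for a normally distributed y and compares the resulting coefficient of skew with 3cv + cv³, the closed form for a log-normal distribution:

```python
# Sketch: numerical moments of a log-normal distribution, checking that
# the coefficient of skew equals 3*cv + cv**3 (a function of cv alone).
import math

def lognormal_moments(ybar, sigy, n=200000):
    # Midpoint-rule integration of moments of x = exp(y) against the
    # normal density of y over +/- 10 standard deviations.
    lo, hi = ybar - 10 * sigy, ybar + 10 * sigy
    dy = (hi - lo) / n
    m1 = m2 = m3 = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * dy
        f = math.exp(-((y - ybar) ** 2) / (2 * sigy ** 2)) / (sigy * math.sqrt(2 * math.pi))
        x = math.exp(y)
        w = f * dy
        m1 += x * w
        m2 += x * x * w
        m3 += x ** 3 * w
    var = m2 - m1 ** 2
    skew = (m3 - 3 * m1 * var - m1 ** 3) / var ** 1.5   # third central moment / sigma**3
    cv = math.sqrt(var) / m1
    return cv, skew

cv, skew = lognormal_moments(0.0, 0.5)
print(round(skew, 3), round(3 * cv + cv ** 3, 3))   # the two values agree
```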
Log-normal distributions are basically a one-parameter family of distributions depending on the appropriate value of cv. This has important implications.
F(y) = 1/(1 + y)^a   for y ≥ 0

ȳ = ∫0∞ a y dy/(1 + y)^(a+1) = 1/(a − 1)   for a > 1
This integral does not converge and a mean does not exist for a ≤ 1. The variance of the standard form of the Pareto distribution is given by
V = a/[(a − 1)²(a − 2)]   for a > 2   (3.45)

This integral does not converge for a ≤ 2 and the variance does not exist.
The Pareto distribution is widely used in economics and is often a good
approximation for the distribution of incomes (Ijiri and Simon, 1977). The
probability distribution functions for the standard form of the Pareto distri-
bution are given in Figure 3.3 for a = 1, 2, and 3. The power-law tail of the
Pareto distribution dies off much more slowly than the tails of the normal or
log-normal distributions; this is the characteristic of fractal distributions.
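The slow power-law decay can be made concrete with a minimal comparison of the standard-form Pareto pdf f(y) = a/(1 + y)^(a+1) against the standard normal pdf (the choice a = 2 and the sample points are arbitrary):

```python
# Sketch: power-law (Pareto) tails decay far more slowly than Gaussian
# tails; the ratio of the two densities grows without bound.
import math

def pareto_pdf(y, a=2.0):
    # standard form of the Pareto probability distribution function
    return a / (1.0 + y) ** (a + 1)

def normal_pdf(y):
    # standard normal probability distribution function
    return math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)

for y in (1.0, 5.0, 10.0):
    print(y, pareto_pdf(y), normal_pdf(y))
# at y = 10 the Pareto density is larger by many orders of magnitude
```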
For y >> 1 we can write (3.43) as
many other aspects of fractal concepts. The wide applicability of scale in-
variance provides a rational basis for fractal statistics just as the central limit
theorem provides a rational basis for Gaussian statistics.
An important distinction between the cumulative Pareto distribution (3.41) and the fractal distribution (3.47) is that the former is finite as x → 0 whereas the latter diverges to ∞ as r → 0. Scale invariance implicitly requires this divergence. Many geological and geophysical data sets also have
this divergence. As a specific example, consider earthquakes. Data on large
earthquakes are often complete, but data on small earthquakes generally do
not exist. Even the best seismic networks cannot resolve the very smallest
earthquakes that are known to occur. Thus it is impossible to define com-
plete probability or cumulative distribution functions for earthquakes. How-
ever, it is possible to determine the number of earthquakes N that have rup-
ture dimensions greater than r, and we will show in Chapter 4 that the
frequency-magnitude statistics for earthquakes are fractal and do satisfy
(3.47).
The final distribution we will consider is an exponential distribution; its probability distribution function is given by

f(x) = (ν x^(ν−1)/x0^ν) exp[−(x/x0)^ν]   for x ≥ 0

where the power ν is generally taken to be an integer. The mean of the exponential distribution is given by
we obtain

x̄ = x0 Γ(1 + 1/ν)

where Γ is the tabulated gamma function (Dwight, 1961, Table 1005). If ν = 2 we have Γ(3/2) = 0.886 so that x̄ = 0.886x0. The variance of the exponential distribution is given by
This is known as the Rosin and Rammler (1933) distribution and it is used
extensively in geostatistical applications. We can also write
1 − F(x) = 1 − exp[−(x/x0)^ν]
Many of the statistical distributions discussed above have been used to rep-
resent the frequency-size (mass) distributions of fragments; these include
log-normal, Pareto, Rosin and Rammler, Weibull, and power law. In terms of
the concepts developed in Chapter 2, it is clear that we would like to relate
the number of fragments N to their linear dimension r. Since fragments can
occur in a variety of shapes, it is appropriate to define a linear dimension r as
the cube root of volume, r = V^(1/3). Assuming constant density it follows that m ∝ r³, where m is the mass of a fragment. However, it is standard practice to
give the total mass of fragments with a linear dimension r less than a speci-
fied value M (<r) or the total mass of fragments with a linear dimension r
greater than a specified value M (>r). The reason for this is that these
masses are obtained directly from a sieve or screen analysis; the mass of
fragments passing through a sieve with a specified aperture r is M (<r) and
the remaining mass is M (>r). Of course we have
This power-law mass relation can be related to the fractal number relation
When data are obtained by sieve analyses, (3.64) is used to convert mass dis-
tributions to number distributions to specify a fractal dimension.
Many experimental studies of the frequency-size distributions of frag-
ments have been carried out. Several examples of power-law fragmentation
are given in Figure 3.5. A classic study of the frequency-size distribution for
broken coal was carried out by Bennett (1936). The frequency-size distribu-
tion for the chimney rubble above the PILEDRIVER nuclear explosion in
Nevada has been given by Schoutens (1979). This was a 61 kt event at a
depth of 457 m in granite. The frequency-size distribution for fragments re-
sulting from the high-velocity impact of a projectile on basalt has been given
by Fujiwara et al. (1977). In each of the three examples a good correlation
with the fractal relation (2.6) is obtained over two to four orders of mag-
nitude. In each example the fractal dimension for the distribution is near
D = 2.5.
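In practice the fractal dimension is obtained as the slope of a least-squares fit of log N against log r. A sketch with synthetic counts (generated with D = 2.5 by construction; these are not measured data):

```python
# Sketch: extracting a fractal dimension from fragment counts N(>r) by
# fitting log N against log r; N ~ r**-D, so the slope gives -D.
import math

radii = [0.1 * 2 ** k for k in range(8)]        # fragment sizes (arbitrary units)
counts = [1.0e6 * r ** -2.5 for r in radii]     # synthetic N(>r) with D = 2.5

logs_r = [math.log10(r) for r in radii]
logs_n = [math.log10(c) for c in counts]
m = len(logs_r)
rbar = sum(logs_r) / m
nbar = sum(logs_n) / m
D = -sum((x - rbar) * (y - nbar) for x, y in zip(logs_r, logs_n)) / \
    sum((x - rbar) ** 2 for x in logs_r)
print(round(D, 3))   # recovers the input value 2.5
```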
Further examples of power-law distributions for fragments are given in
Table 3.2. It will be seen that a great variety of fragmentation processes can
be interpreted in terms of a fractal dimension. Examples include impact shat-
Figure 3.5. Since fragments
have a variety of shapes, the
cube root of volume is an
objective measure of size.
The number N of fragments
with cube root of volume
greater than r is given as a
function of r for broken coal
(Bennett, 1936), broken
granite from a 61 kt
underground nuclear
detonation (Schoutens,
1979), and impact ejecta
due to a 2.6 km s⁻¹
polycarbonate projectile
impacting on basalt
(Fujiwara et al., 1977). The
best-fit fractal distribution
from (3.59) is shown for
each data set.
Object                                     Reference                  Fractal dimension D
Artificially crushed quartz                Hartmann (1969)            1.89
Disaggregated gneiss                       Hartmann (1969)            2.13
Disaggregated granite                      Hartmann (1969)            2.22
FLAT TOP I (chemical explosion, 0.2 kt)    Schoutens (1979)           2.42
PILEDRIVER (nuclear explosion, 62 kt)      Schoutens (1979)           2.50
Broken coal                                Bennett (1936)             2.50
Asteroids                                  Klacka (1992)              2.50
Projectile fragmentation of quartzite      Curran et al. (1977)       2.55
Projectile fragmentation of basalt         Fujiwara et al. (1977)     2.56
Fault gouge                                Sammis and Biegel (1989)   2.60
Sandy clays                                Hartmann (1969)            2.61
Soils                                      Wu et al. (1993)           2.80
Terrace sands and gravels                  Hartmann (1969)            2.82
Glacial till                               Hartmann (1969)            2.88
Ash and pumice                             Hartmann (1969)            3.54
It is seen that the values of the fractal dimension vary considerably, but
most lie in the range 2 < D < 3. This range of fractal dimensions can be re-
lated to the total volume of fragments and to their surface area.
The total volume (mass) of fragments is given by
since r has been defined to be the cube root of the volume. In all cases it is
expected that there will be upper and lower limits to the validity of the frac-
tal (power-law) relation for fragmentation. The upper limit rmax is generally controlled by the size of the object or region that is being fragmented. The lower limit rmin is likely to be controlled by the scale of the heterogeneities responsible for fragmentation, for example the grain size. For a power-law
(fractal) distribution of sizes, substitution of (3.61) into (3.65) and integra-
tion gives
If 0 < D < 3 it is necessary to specify rmax but not rmin to obtain a finite volume (mass) of fragments. The volume (mass) of fragments is predominantly in the largest fragments. This is the case for most observed distributions of fragments (see Table 3.2). If D > 3 it is necessary to specify rmin but not rmax. The volume (mass) of the small fragments dominates.
The total surface area A of the fragments is given by
If 0 < D < 2 it is necessary to specify rmax but not rmin to obtain a finite total surface area for the fragments. But if D > 2 it is necessary to specify rmin to
constrain the total surface area to a finite value. Thus for most observed dis-
tributions of fragments (see Table 3.2) the surface area of the smallest frag-
ments dominates.
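These convergence statements follow from integrating r^(2−D) for the volume and r^(1−D) for the area (taking dN/dr ∝ r^(−D−1)). A numerical sketch for D = 2.5, with rmax = 1 and the sample rmin values chosen arbitrarily:

```python
# Sketch: for N(>r) ~ r**-D, total volume ~ integral of r**(2-D) and
# total area ~ integral of r**(1-D). With 2 < D < 3 the volume stays
# finite as r_min -> 0 while the area diverges.
D = 2.5
r_max = 1.0

def volume(r_min, D=D):
    # closed-form integral of r**(2-D) from r_min to r_max
    p = 3.0 - D
    return (r_max ** p - r_min ** p) / p

def area(r_min, D=D):
    # closed-form integral of r**(1-D) from r_min to r_max (D != 2)
    p = 2.0 - D
    return (r_max ** p - r_min ** p) / p

for r_min in (1e-2, 1e-4, 1e-6):
    print(r_min, volume(r_min), area(r_min))
# the volume tends to a constant; the area grows like r_min**(2-D)
```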
where Vo is the volume of the zero-order cells. The probability that a zero-
order cell will fragment to produce eight zero-order elements is taken to be f.
The number of zero-order elements produced by fragmentation is
After fragmentation the number of zero-order cells that have not been fragmented, N0c, is given by
for these smaller cubes. The problem is renormalized and the cubes with di-
mension h/2 are treated in exactly the same way that the cubes with linear di-
mension h were treated above. Each of the fragmented cubic elements with
linear dimension h/2 is taken to be a first-order cell; each of these cells is di-
vided into eight first-order cubic elements with linear dimensions h/4 as il-
lustrated in Figure 3.6. The volume of each first-order element is
After fragmentation the number of first-order cells that have not been frag-
mented is
Taking the natural logarithm of both sides we can write (3.75) and (3.76) as
agreement with the fractal relation (2.6) is obtained taking D = 2.60. Thus
the fractal dimensions for the discrete set and the cumulative statistics are
nearly equal.
This comminution model was originally developed for fault gouge. The
derived fractal dimension for the model D = 2.60 is in excellent agreement
with the measured values for fault gouge described in the last section. It is
seen from Figure 3.5 and Table 3.2 that many observed distributions of frag-
ments have fractal distributions near this value. This is evidence that the
comminution model may be widely applicable to rock fragmentation. This
model may also be applicable to tectonic zones in the earth's crust. The im-
plication is that there is a fractal distribution of tectonic blocks over a wide
range of scales.
A number of other models have been proposed to explain fractal frag-
mentation. Steacy and Sammis (1991) developed an automaton that modeled
nearest neighbor fragmentation. Palmer and Sanderson (1991) developed a
model for crushing ice that accounts for the relative size of contacting frag-
ments. In their model, D = 2.5 has the special significance that fragments of
all sizes make equal contributions to the crushing force. Englman et al.
(1987, 1988) have obtained a power-law distribution utilizing a maximum-
entropy model.
3.5 Porosity
Most rock has a natural porosity. This porosity often provides the necessary
permeability for fluid flow. There are generally two types of porosity, inter-
granular porosity and fracture porosity. Based on the discussion given above
it would not be surprising if both types of porosity exhibited fractal behav-
ior. Fractures are directly related to fragmentation, and detrital rocks are
composed of rock grains with a variety of scales. Based on laboratory stud-
ies a number of authors have suggested that sandstones have a fractal distri-
bution of porosity (Katz and Thompson, 1985; Krohn and Thompson, 1986;
Daccord and Lenormand, 1987; Krohn, 1988a, b; Thompson et al., 1987).
Hansen and Skjeltorp (1988) carried out two-dimensional box counting of
the pore space in a sandstone and found D = 1.73. Brakensiek et al. (1992)
carried out similar studies of the two-dimensional porosity of soils and
found D = 1.8. Soils can be considered both in terms of fractal distributions
of particle sizes and in terms of fractal distributions of void spaces (Rieu and
Sposito, 1991a, b; Tyler and Wheatcraft, 1992). Fractal distributions of
voids have also been suggested to be applicable to caves (Curl, 1986), karst
regions (Laverty, 1987), and sinkholes (Reams, 1992).
We previously introduced models with scale-invariant porosity in Figure
2.4. The Menger sponge, Figure 2.4(a), can be taken as a simple model for a
This is a fractal relation and is illustrated in Figure 3.9. For the Menger sponge the fractal dimension is D = ln 20/ln 3 = 2.727. Generalizing (3.81), the porosity φ for a fractal medium can be related to its fractal dimension by
where r is the linear dimension of the sample considered. Similarly, the density ρ of the fractal medium scales with its size according to

ρ ∝ r^(D−3)
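The Menger-sponge numbers can be checked directly: at order k there are 20^k retained cubes of edge 3^(−k), so the solid fraction decreases as the scale is refined while the recovered dimension stays fixed. A sketch:

```python
# Sketch: Menger sponge bookkeeping. At order k: 20**k solid cubes of
# edge 3**-k, giving D = ln 20 / ln 3 at every order and a solid
# fraction (density) that decreases with refinement.
import math

D = math.log(20) / math.log(3)
print(round(D, 3))   # 2.727

for k in range(1, 5):
    n = 20 ** k                   # number of retained cubes at order k
    r = 3.0 ** -k                 # their edge length
    solid_fraction = n * r ** 3   # equals (20/27)**k, decreasing with k
    d_k = math.log(n) / math.log(1.0 / r)   # dimension recovered from N = r**-D
    print(k, solid_fraction, round(d_k, 3))
```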
Problems

Assume that the maximum fragment size is r, and that ν > 1. Determine the mean fragment radius r̄ and the variance V about this mean.
Problem 3.15. Consider a bar of unit length that has a probability f2 of being fragmented into two bars of equal length 1/2. The two smaller bars have the same probability of being fragmented into bars of length 1/4. Show that this process leads to a fractal distribution with
Chapter Four
SEISMICITY AND TECTONICS
4.1 Seismicity
Although the crustal deformation in the western United States may ap-
pear to be complex, it does obey fractal statistics in a variety of ways. This is
true of all zones of tectonic deformation. We will first consider the fre-
quency-magnitude statistics of earthquakes. Several quantities can be used
to specify the size of an earthquake; these include the strain associated with
the earthquake and the radiated seismic energy. However, for historical rea-
sons the most commonly used measure of earthquake size is its magnitude.
Unfortunately, a variety of different magnitude scales have been proposed;
but to a first approximation, the magnitude is the logarithm of the energy ra-
diated and dissipated in an earthquake. Typically great earthquakes have a
magnitude m = 8 or larger. The 1992 Landers earthquake had a magnitude
m = 7.6 and was the largest earthquake in California since the great 1906 San
Francisco earthquake. The 1989 Loma Prieta earthquake had m = 7.1 and the
1994 Northridge earthquake had m = 6.6.
Many regions of the world have dense seismic networks that can moni-
tor earthquakes as small as magnitude two or less. The global seismic net-
work is capable of monitoring earthquakes that occur anywhere in the world
with a magnitude greater than about four. Various statistical correlations
have been used to relate the frequency of occurrence of earthquakes to their
magnitude, but the most generally accepted is the log-linear relation (Guten-
berg and Richter, 1954)

log N = −bm + log a   (4.1)

where b and a are constants, the logarithm is to the base 10, and N is the
number of earthquakes per unit time with a magnitude greater than m occur-
ring in a specified area. The Gutenberg-Richter law (4.1) is often written in
terms of N, the number of earthquakes in a specified time interval (say 30
years), and the corresponding constant a.
The magnitude scale was originally defined in terms of the amplitude of
ground motions at a specified distance from an earthquake. Typically the
surface wave magnitude was based on the motions generated by surface
waves (Love and Rayleigh waves) with a 20-s period, and the body wave
magnitude was based on the motions generated by body waves (P and S
waves) having periods of 6.8 seconds. The magnitude scale became a popu-
lar measure of the strength of earthquakes because of the logarithmic basis,
which allows essentially all earthquakes to be classified on a scale of 0-10.
Alternative magnitude definitions include the local magnitude and the mag-
nitude determined from the earthquake moment.
The frequency-magnitude relation (4.1) is found to be applicable over a
wide range of earthquake sizes both globally and locally. The constant b or
"b-value" varies from region to region but is generally in the range 0.8 <
b < 1.2 (Frohlich and Davis, 1993). The constant a is a measure of the re-
gional level of seismicity.
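Once b and a are fixed, (4.1) gives the expected rate above any magnitude directly. In the sketch below the values b = 1.0 and a = 10⁵ yr⁻¹ are illustrative assumptions, not a fit to any catalog:

```python
# Sketch of the Gutenberg-Richter relation log10 N = -b*m + log10(a):
# expected number of earthquakes per year above magnitude m.
b = 1.0        # illustrative b-value (regionally 0.8 < b < 1.2)
a = 1.0e5      # illustrative rate constant, earthquakes per year

def n_per_year(m, b=b, a=a):
    # number of earthquakes per year with magnitude greater than m
    return a * 10.0 ** (-b * m)

for m in (4, 5, 6, 7):
    print(m, n_per_year(m))
# each unit increase in magnitude lowers the rate by a factor of 10**b
```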
where μ is the shear modulus of the rock in which the fault is embedded, A is the area of the fault break, and δ is the mean displacement across the fault during the earthquake. The moment M of an earthquake can be related to its magnitude by

log M = cm + d   (4.4)
where c and d are constants. Kanamori and Anderson (1975) have estab-
lished a theoretical basis for taking c = 1.5. Kanamori (1978) and Hanks and
Kanamori (1979) have argued that (4.4) can be taken as a definition of mag-
nitude with c = 1.5 and d = 9.1 (M in joules). This definition is consistent
with the definitions of local magnitude and surface wave magnitude but not
with the definition of body wave magnitude. It is standard practice today to
use long-period (50-200 s) body and/or surface waves to directly determine
the scalar moment M and (4.4) is used to obtain a moment magnitude.
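With c = 1.5 and d = 9.1 the definition (4.4) can be inverted directly to obtain a moment magnitude from a measured scalar moment; a sketch (the test value m = 7.0 is arbitrary):

```python
# Sketch of the moment-magnitude relation log10 M = c*m + d with
# c = 1.5 and d = 9.1 (moment M in N m, i.e., joules).
import math

C, D_CONST = 1.5, 9.1

def moment_from_magnitude(m):
    return 10.0 ** (C * m + D_CONST)   # scalar moment in N m

def magnitude_from_moment(M):
    return (math.log10(M) - D_CONST) / C

M = moment_from_magnitude(7.0)
print(M)                             # about 4e19 N m for m = 7
print(magnitude_from_moment(M))      # recovers m = 7 (to rounding)
```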
Kanamori and Anderson (1975) have also shown that it is a good ap-
proximation to relate the moment of an earthquake to the area A of the rup-
ture by
with
log β = bd/c + log a − (b/c) log α
In a specified region the number of earthquakes N per unit time with rupture areas greater than A has a power-law dependence on the area. A comparison with the definition of a fractal given in (2.6) with A ∝ r² shows that the fractal dimension of distributed seismicity is

D = 3b/c

so that D = 2b for c = 1.5.
where the integral is carried out over the entire distribution of seismicity and dN is the number of earthquakes per unit time with magnitudes between m and m + dm. The earthquake moment has been introduced from (4.3). We hypothesize that a fractal distribution of seismicity accommodates this relative velocity. From (4.1) and (4.4) we have
and
Since c > b the integral diverges for large m so that the maximum-magnitude earthquake mmax must be specified. This is the well-known observation that a large fraction of the total moment and energy associated with seismicity occurs in the largest events. Integration of (4.15) gives
with (4.1), taking b = 0.923 (D = 1.846) and a = 1.4 × 10⁵ yr⁻¹. In terms of the linear dimension of the fault rupture, this magnitude range corresponds to a linear size range 0.7 < A^(1/2) < 40 km.
Also included in Figure 4.2 is the value of N associated with great earthquakes on the southern section of the San Andreas fault. Dates for 10 large earthquakes on this section of the fault have been obtained from radiocarbon dating of faults, folds, and liquefaction features within the marsh and stream deposits on Pallett Creek where it crosses the San Andreas fault 55 km northeast of Los Angeles (Sieh et al., 1989). In addition to historical great earthquakes on January 9, 1857, and December 8, 1812, additional great earthquakes were estimated to have occurred in 1480 ± 15, 1346 ± 17, 1100 ± 65, 1048 ± 33, 997 ± 16, 797 ± 22, 734 ± 13, and 671 ± 13. The mean repeat time is 132 years, giving N = 0.0076 yr⁻¹. The most recent in the sequence of earthquakes occurred in 1857, and the observed offset across the fault associated with this earthquake was 12 m (Sieh and Jahns, 1984). Sieh (1978) estimates that the magnitude of the 1857 earthquake was m = 8.25.
Taking the values given above, the recurrence statistics for these large earth-
quakes are shown by the solid circle in Figure 4.2. An extrapolation of the
fractal relation for regional seismicity appears to make a reasonable predic-
tion of great earthquakes on this section of the San Andreas fault. Since this
extrapolation is based on the 40 years of data between 1932 and 1972, a rel-
atively large fraction of the main interval of 132 years, it suggests that the
value of a for this region may not have a strong dependence on time during
the earthquake cycle. This conclusion has a number of important implica-
tions. If a great earthquake substantially relieved the regional stress, then it
would be expected that the regional seismicity would systematically in-
crease as the stress increased before the next great earthquake. An alterna-
tive hypothesis is that an active tectonic zone is continuously in a critical
state and that the fractal frequency-magnitude statistics are evidence for this
critical behavior. In the critical state the background seismicity, small earth-
quakes not associated with aftershocks, have little time dependence. This
hypothesis will be discussed in Chapter 16. Acceptance of this hypothesis al-
lows the regional background seismicity to be used in assessing seismic haz-
ards (Turcotte, 1989b). The regional frequency-magnitude statistics can be
extrapolated to estimate recurrence times for larger magnitude earthquakes.
Unfortunately, no information is provided on the largest earthquake to be
expected.
An important question in seismology is whether the occurrence of large
plate-boundary earthquakes can be estimated by extrapolating the regional
seismicity as was done above for southern California. This is a subject of
considerable controversy. Some authors argue that the large earthquakes oc-
cur more often than would be predicted by an extrapolation.
To further consider the time dependence of regional seismicity (the time
dependence of a), we consider the frequency-magnitude statistics of the re-
gional seismicity in southern California on a yearly basis. Again the number
of earthquakes N in each year between 1980 and 1994 with magnitudes greater than m are given in Figure 4.3 as a function of m. In general there is good agreement with (4.1), taking b = 1.05 and a = 2.06 × 10⁵ yr⁻¹. The ex-
ceptions can be attributed to the aftershock sequences of the Whittier (1987),
Landers (1992), and Northridge (1994) earthquakes. Comparing the correla-
tion lines in Figures 4.2 and 4.3 shows that the correlation line in Figure 4.2
lies somewhat above those in Figure 4.3. This is because the data given in
Figure 4.2 include aftershocks. With aftershocks removed, the near uni-
formity of the background seismicity in southern California illustrated in
Figure 4.3 is clearly striking. This is strongly suggestive of a thermodynamic
behavior. We will return to this point in Chapter 16.
We now relate the seismicity in southern California to the relative velocity across the plate boundary. The data given in Figure 4.2 can be used to predict the regional strain using (4.16). Substituting μ = 3 × 10¹⁰ Pa, b = 0.89, c = 1.5, d = 9.1, v = 48 mm yr⁻¹, mmax = 8.05, and a = 1.4 × 10⁵ yr⁻¹ we find from (4.16) that Ap = 1.5 × 10⁴ km². Taking the depth of the seismogenic zone to be 15 km, the length of the seismogenic zone corresponding to
this area is 730 km. This is about a factor of two larger than the actual length
of the San Andreas fault in southern California. This is reasonably good
agreement considering the uncertainties in the parameters. However, there
are two other factors that can contribute to this discrepancy.
Since the eastern United States is a plate interior, the concept of rigid plates
would preclude seismicity in the region. However, the plates act as stress
guides. The forces that drive plate tectonics are applied at plate boundaries.
The negative buoyancy force acting on the descending plate at a subduction
zone acts as a "trench pull." Gravitational sliding off an ocean ridge acts as a
"ridge push." Because the plates are essentially rigid, these forces are trans-
mitted through their interiors. However, the plates have zones of weakness
that will deform under these forces and earthquakes result. Thus earthquakes
occur within the interior of the surface plates of plate tectonics, although the
frequencies of occurrence are much lower than at plate boundaries. An ex-
ample was the three great earthquakes that occurred in the Memphis-
St. Louis (New Madrid, Missouri) seismic zone during the winter of 1811-
1812. Nuttli (1983), based on historical records, has estimated that the sur-
face wave magnitudes of these earthquakes were 8.5, 8.4, and 8.8, respec-
tively. This area remains the most active seismic zone in the United States
east of the Rocky Mountains. Based on both instrumental and historical
records Johnston and Nava (1985) have given the frequency-magnitude sta-
tistics for earthquakes in this area for the period 1816-1983. Their results are
given in Figure 4.4. The data correlate well with (4.1), taking b = 0.90 (D = 1.80) and a = 2.24 × 10³ yr⁻¹. Comparing the data in Figure 4.4 with the
data in Figure 4.2 indicates that the probability of having a moderate-sized
earthquake in the Memphis-St. Louis seismic zone is about 1/50 of the prob-
ability in southern California. Assuming that it is valid to extrapolate the
data in Figure 4.4 to larger earthquakes, a magnitude m = 8 would have a re-
currence time of about 7000 yr.
Although there is certainly a significant range of errors, the results given
above indicate that the measured frequency-magnitude statistics associated
with the Gutenberg-Richter frequency-magnitude relation (4.1) can be used
to assess seismic hazards. The regional b(D) and a values can be used to esti-
mate recurrence times for earthquakes of various magnitudes.
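The extrapolation quoted above for the New Madrid zone can be reproduced from (4.1); a sketch using b = 0.90 and a = 2.24 × 10³ yr⁻¹ from the Figure 4.4 correlation:

```python
# Sketch: recurrence time from Gutenberg-Richter constants for the
# New Madrid seismic zone (b and a taken from the correlation above).
b = 0.90
a = 2.24e3   # yr**-1

def recurrence_time(m):
    n = a * 10.0 ** (-b * m)   # earthquakes per year above magnitude m
    return 1.0 / n             # mean time between such earthquakes, yr

print(round(recurrence_time(8.0)))   # about 7000 yr, as quoted in the text
```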
4.2 Faults
There are two end-member models that give fractal distributions of earth-
quakes. The first is that there is a fractal distribution of faults and each fault
has its own characteristic earthquake. The second is that each fault has a
fractal distribution of earthquakes. Observations strongly favor the first hy-
pothesis. On the northern and southern locked sections of the San Andreas
fault, there is no evidence for a fractal distribution of earthquakes. Great
earthquakes and their associated aftershock sequences occur, but between
great earthquakes seismicity is essentially confined to secondary faults.
A similar statement can be made about the Parkfield section of the San
Andreas fault, where moderate-sized earthquakes occurred in 1881, 1901,
1924, 1934, and 1966. There is no evidence for a fractal distribution of
events on this section of the San Andreas fault. We therefore conclude that a
reasonable working hypothesis is that each fault has a characteristic earth-
quake and a fractal distribution of earthquakes implies a fractal distribution
of faults.
Although we can conclude that the frequency-size distribution of faults
is fractal, the fractal dimension is not necessarily the same as that for earth-
quakes. Equal fractal dimensions would imply that the interval of time be-
tween earthquakes is independent of scale. This need not be the case. Tec-
tonic models for a fractal distribution of faults have been proposed by King
(1983, 1986), Turcotte (1986b), King et al. (1988), and Hirata (1989a). Frac-
tal distributions of faults that give well-defined b-values have been proposed
by Huang and Turcotte (1988) and Hirata (1989b).
Before discussing the observational data on spatial distributions of
faults, we will discuss the definitions of faults, joints, and fractures. A fracture is, in general, any crack in a rock. If there is a lateral offset across the fracture, it is a fault; if there is no lateral offset, it is a joint. Because of the
grinding (comminution) effect of creating offsets on faults during earth-
quakes, a zone of brecciated rock (fault gouge) generally develops on the
fault. The larger the total offset on the fault, the wider the disrupted zone.
It is generally difficult to quantify the frequency-size distributions of
faults. This is because the surface exposure is generally limited. Many faults
are not recognized until earthquakes occur on them. Coal mining areas pro-
vide access to faults and fractures at depth. The cumulative distributions of
the number of faults N with lengths greater than r are given in Figure 4.5 for
two coal mining areas (Villemin et al., 1995). Correlations with the fractal
relation (2.6) are given with D = 1.6. Other compilations of the number-
length statistics of faults and comparisons with power-law correlations have
been given by Gudmundsson (1987), Hirata (1989a), and Main et al. (1990).
Hirata et al. (1987) and Velde et al. (1993) found a fractal distribution of mi-
crofractures in laboratory experiments that stressed unfractured granite.
In the one-dimensional box-counting technique, a line (for example, a drill core) is divided into segments of length r; the number of segments that include points (fractures), N(r), is determined and log N(r) is plotted against log r (or log 1/r). If a linear or near-linear dependence is found, the slope gives the fractal dimension using (2.2).
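The one-dimensional box-counting procedure just described can be sketched as follows (our illustration: the "fracture" positions are a synthetic random Cantor set with known dimension ln 2/ln 3 ≈ 0.63, not Barton's core data):

```python
import numpy as np

rng = np.random.default_rng(0)

def box_count_1d(positions, sizes):
    """Number of segments of length r that contain at least one point (2.2)."""
    return np.array([np.unique(np.floor(positions / r)).size for r in sizes])

def random_cantor(level, x0=0.0, width=100.0):
    """Synthetic fracture positions along a 100-m line: keep 2 of 3
    sub-intervals at each level, so the exact dimension is ln 2 / ln 3."""
    if level == 0:
        return [x0 + width * rng.random()]
    pts = []
    for k in rng.choice(3, size=2, replace=False):
        pts += random_cantor(level - 1, x0 + k * width / 3.0, width / 3.0)
    return pts

positions = np.array(random_cantor(8))
sizes = np.array([100.0 / 3 ** k for k in range(1, 7)])
counts = box_count_1d(positions, sizes)
D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print(f"box-counting dimension D = {D:.2f}")  # -> 0.63
```

For a real core log the positions array would hold the measured fracture intercepts along the hole.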
Barton (1995) analyzed the distribution of gold-bearing, quartz-filled
fractures (veins) intersecting exploratory drill holes from tunnels in the Per-
severance Mine, Juneau, Alaska. His results for core hole 7-18, using the one-
dimensional box-counting technique, are given in Figure 4.8. A good correla-
tion with the fractal relation (2.2) is obtained taking D = 0.59. For the 23 drill
holes studied by Barton (1995), good correlations with fractal statistics were
obtained, with D ranging from 0.41 to 0.62. Similar studies have been carried
out by La Pointe (1988), Velde et al. (1990, 1991), Ledesert et al. (1993),
Manning (1994), Boadu and Long (1994a, b), and Magde et al. (1995).
It should be emphasized that a wide variety of mechanisms are responsi-
ble for the formation of joints and faults and not all would be expected to
yield fractal distributions. Limitations of the fractal approach have been dis-
cussed by Harris et al. (1991) and Gillespie et al. (1993).
The fractal model for fragmentation illustrated in Figure 3.7 can also be
applied to tectonic fragmentation (Sammis et al., 1987). As the surface plates
of plate tectonics evolve in time, geometrical incompatibilities develop
(Dewey, 1975). Simple plate boundaries consisting of ocean ridges, subduction
zones, and transform faults cannot evolve in time without overlaps or holes de-
veloping. The result is that plate interiors must deform to accommodate the
geometrical incompatibilities. Because of the weaker silicic rocks of the conti-
nental crust, and the many ancient faults pervading the continental lithosphere,
continental parts of surface plates deform much more readily than oceanic
parts. This can be easily seen at the boundary between the Pacific and North
American plates in the western United States. The adjacent continental North
American plate consisting of the western states deforms extensively whereas
there is little internal deformation in the adjacent oceanic Pacific plate.
Just as the comminution model can be applied to fragmentation, it can
also be applied to the deformation of the continental crust. The tectonic
forces break the continental crust into a fractal distribution of interacting
crustal blocks over a wide range of scales. The crustal blocks are bounded by
faults so that a fractal distribution of block sizes can be related to a fractal
distribution of faults. To illustrate this we consider the deterministic comminution model for fragmentation given in Figure 3.7. To fragment the single zero-order block of size h requires three orthogonal faults (N₀ = 3) of size r₀ = h; the result is eight blocks of size h/2. Six of these eight blocks are further fragmented; this requires N₁ = 3 × 6 = 18 faults of size r₁ = h/2. The result is 48 blocks of size h/4; 36 of these 48 blocks are further fragmented, and so on. Considering a two-dimensional cross section through this construction, we see that the fractal dimension of the cross section, D₂ = ln 3/ln 2, is related to the fractal dimension of the original construction from Figure 3.3, D₃ = ln 6/ln 2, by

D₂ = D₃ − 1

Similarly we assume that for earthquakes Dₑ = 2, so that the number of earthquakes per unit time, in a given area, with a characteristic rupture size greater than r scales with r according to

N(> r) ∝ r^(-2)
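The counting in this construction can be verified numerically; a minimal sketch (our illustration):

```python
import math

# Deterministic comminution model: at order n there are N_n = 3 * 6**n new
# faults of size r_n = h / 2**n.
h = 1.0
orders = range(6)
r = [h / 2 ** n for n in orders]
N = [3 * 6 ** n for n in orders]

# slope of log N versus log(1/r) gives the fractal dimension
D = (math.log(N[-1]) - math.log(N[0])) / (math.log(r[0]) - math.log(r[-1]))
print(f"D = {D:.3f}")  # -> D = 2.585, i.e. ln 6 / ln 2
```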
Walsh and Watterson (1988) and Marrett and Allmendinger (1991) have compiled measurements of the dependence of the total displacement δ on a fault as a function of the fault length r and concluded that there is a power-law (fractal) scaling. The results obtained by Marrett and Allmendinger (1991) are given in Figure 4.11. Data from a wide variety of tectonic environments are included. Although there is considerable scatter, a reasonably good correlation with (4.26) is found. It should be emphasized that this correlation must be to some extent fortuitous, since the constant of proportionality is unlikely to be the same in different tectonic settings. Also, Scholz and Cowie (1990), Cowie and Scholz (1992a, b), Scholz et al. (1993), and Dawers et al. (1993) conclude that δ ∝ r with an additional parameter, the critical shear stress for fault propagation. These authors correlate fault displacement and length in individual tectonic environments and find for each environment a reasonably good correlation with δ ∝ r.
They argue that it is misleading to include data from a variety of tectonic en-
vironments. In addition, Gillespie et al. (1992) find a universal power-law
correlation between fault width and total displacement. Jackson and Sander-
son (1992) and Pickering et al. (1994) have concluded that in several exam-
ples the number of faults with displacements greater than a specified value
satisfy fractal statistics with D = 0.7-1.4.
Sedimentary basins are often formed by horizontal extension on suites
of normal faults. The horizontal extension thins the continental crust, result-
ing in the subsidence of the surface and the deposition of a sedimentary pile
on the subsiding "basement." A common observation is that the amount of
extension associated with "visible" normal faults (for example, on seismic
reflection profiles) is significantly less than the amount of extension associ-
ated with the observed crustal thinning. Typically only 40-70% of the re-
quired extension can be associated with the larger faults on which displace-
ments can be determined.
Using a fractal distribution for the number of faults as a function of size and
the displacements of these faults as described above, the displacements on small
unobserved faults can be determined from the displacements on the larger faults.
Walsh et al. (1991) and Marrett and Allmendinger (1992) have argued that this approach can explain the discrepancy. The total strain ε in a volume V is related to the number of faults N, the fault areas A, and the total fault displacements δ by
If these statistics are valid in a region, the larger faults dominate in terms of
regional strain, but the smaller faults do make a significant contribution.
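This partitioning of strain between large and small faults can be illustrated with a toy octave sum (entirely our sketch: the value D = 2.6, the 20-octave population, and the four-octave visibility cutoff are assumptions, not values from the text). With nᵢ ∝ 2^(iD) faults of length rᵢ ∝ 2^(−i) per octave, area A ∝ r², and displacement δ ∝ r, octave i contributes ∝ 2^(i(D−3)) to the total strain:

```python
D = 2.6                                    # assumed fractal dimension
contrib = [2.0 ** (i * (D - 3.0)) for i in range(20)]  # strain per octave
total = sum(contrib)
visible = sum(contrib[:4])                 # only the 4 largest octaves mapped
print(f"fraction of strain on visible faults: {visible / total:.2f}")
```

For D < 3 the largest faults dominate, but with D approaching 3 the unmapped small faults carry a substantial share; here the four mapped octaves carry about two-thirds of the strain, comparable to the 40-70% visibility figures quoted above.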
4.3 Spatial distribution of earthquakes

The box-counting method in three dimensions has been applied to the spatial
distribution of earthquake aftershocks by Robertson et al. (1995). After-
shocks are particularly well located because extensive arrays of seismome-
ters have been deployed following the main shock. These authors considered
the aftershock sequence of the m = 6.1 Joshua Tree earthquake of April 23,
1992 (2600 events in a 20 × 20 × 19 km volume in 160 days), and the after-
shock sequence of the m = 6.2 Big Bear earthquake of June 28, 1992 (818
events in a 20 × 20 × 17 km volume in 375 days). The spatial distributions
of these aftershocks are given in Figure 4.12(a, b).
The numbers of cubes occupied by one or more earthquakes are given as
a function of the cube size in Figure 4.12(c) for the two aftershock se-
quences; cubes with linear dimensions between 500 m and 20 km were used.
The data are in quite good agreement with (2.2) taking D = 2. A fractal di-
mension of two would be expected if the earthquakes lie on a plane; how-
ever, there is considerable three-dimensional structure to the aftershock se-
quences. This led Robertson et al. (1995) to suggest that the earthquakes
form the "backbone" of a percolation cluster. The "backbone" of a three-
dimensional percolation cluster has a fractal dimension near two. A detailed
discussion of percolation clusters and the meaning of the "backbone" will be
given in Chapter 15. The box-counting technique has been applied to both
the temporal and two-dimensional spatial distribution of earthquakes in
Japan by Bodri (1993).
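The three-dimensional cube-counting measurement can be sketched as follows (our illustration: the 2600 synthetic "events" scattered about a plane are a stand-in for an aftershock catalog, not the Joshua Tree data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic aftershock cloud: events on a 20 x 20 km fault plane with about
# 1 km of off-plane structure, inside a 20 x 20 x 19 km volume (assumed).
n = 2600
x = 20.0 * rng.random(n)                  # along strike, km
z = 19.0 * rng.random(n)                  # depth, km
y = 10.0 + rng.standard_normal(n)         # clustered near the plane y = 10 km

def cube_count(points, size):
    """Number of cubes of edge `size` occupied by one or more events."""
    return np.unique(np.floor(points / size), axis=0).shape[0]

pts = np.column_stack([x, y, z])
sizes = np.array([1.0, 2.0, 4.0, 8.0])    # cube edges, km (subset of 0.5-20)
counts = np.array([cube_count(pts, s) for s in sizes])
D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print(f"D = {D:.2f}")                     # near 2 for a roughly planar cloud
```

A roughly planar cloud gives D near two, the value Robertson et al. (1995) obtained for the aftershock sequences.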
4.4 Volcanic eruptions

Volcanic eruptions occur in many different ways. Some eruptions produce primarily magma (liquid rock) while others produce primarily tephra (ash). Utilizing the volume of tephra as a measure of size, McClelland et al. (1989) have published frequency-volume statistics for volcanic eruptions. Their results for eruptions during the
ume statistics for volcanic eruptions. Their results for eruptions during the
period 1975-1985 and for historic (last 200 years) eruptions are given in
Figure 4.13. The number of eruptions with a volume of tephra greater than
a specified value is given as a function of the volume. A reasonably good
correlation is obtained with the fractal relation (2.6) by taking D = 2.14. It
appears that volcanic eruptions are scale invariant over a significant range
of sizes.
A single volcano can produce eruptions with a wide spectrum of sizes.
Also, volcanoes have a wide spectrum of sizes. The circumstances that de-
termine the volume of tephra in an eruption are poorly understood. Thus
models that would provide an explanation of the observed value of D are not
available.
Problems
ORE GRADE AND TONNAGE
Statistical treatments of ore grade and tonnage for economic ore deposits
have provided a basis for estimating ore reserves. The objective is to deter-
mine the tonnage of ore with grades above a specified value. The grade is de-
fined as the ratio of the mass of the mineral extracted to the mass of the ore.
Evaluations can be made on either a global or a regional basis. Much of the
original work on this problem was carried out by Lasky (1950). He argued
that ore grade and tonnage obey log-normal distributions.
Other authors, however, have suggested that a linear relation is obtained
if the logarithm of the tonnage of ore with grades above a specified value is
plotted against the logarithm of the grade. The latter is a fractal relation. A
fractal relation would be expected if the concentration mechanism is scale
invariant. Many different mechanisms are responsible for the concentrations
of minerals that lead to economic ore deposits. Probably the most widely ap-
plicable mechanisms are associated with hydrothermal circulations.
We first consider two simple models that illustrate the log-normal and
power-law distributions for tonnage versus grade. De Wijs (1951, 1953) pro-
posed the model for mineral concentration that is illustrated in Figure 5.l(a).
In this model an original mass of rock Mo is divided into two equal parts
each with a mass M , = M d 2 . The original mass of the rock has a mean min-
eral concentration Co, which is the ratio of the mass of mineral to mass of
rock. As in Chapter 3 we refer to this mass as a zero-order cell. It is hypothe-
sized that the mineral is concentrated into one of the two zero-order ele-
ments such that one element is enriched and the other element is depleted.
The zero-order elements then become first-order cells, each of which is di-
vided into two first-order elements with mass M , = M d 4 .
The mean mineral concentration in the enriched zero-order element, C₁₁, is given by

C₁₁ = φ₂C₀    (5.1)

where φ₂ is the enrichment factor. The first subscript on C refers to the order of cell being considered. The second subscript refers to the amount of enrichment: the lower the number, the greater the enrichment and the higher the concentration. The subscript on the enrichment factor refers to the fact that each cell is divided into two equal elements; the enrichment factor φ₂ is greater than unity since C₁₁ must be greater than C₁₂. A simple mass balance shows that the concentration in the depleted zero-order element is

C₁₂ = (2 − φ₂)C₀    (5.2)

The enrichment factor must be in the range 1 < φ₂ < 2. This model is illustrated in Figure 5.1(a). The process of concentration is then repeated at the next order, as illustrated in Figure 5.1(a). The zero-order elements become first-order cells and each cell is again divided into two elements of equal mass M₂ = M₀/4. The mineral is again concentrated by the same ratio into each first-order element. The enriched first-order element in the enriched first-order cell has a concentration

C₂₁ = φ₂²C₀    (5.3)

The depleted first-order element of the enriched first-order cell and the enriched first-order element of the depleted first-order cell both have the same concentration:

C₂₂ = C₂₃ = φ₂(2 − φ₂)C₀    (5.4)

The depleted first-order element of the depleted first-order cell has a concentration

C₂₄ = (2 − φ₂)²C₀    (5.5)

This result is also illustrated in Figure 5.1(a) along with two higher-order
cells. This model gives a binomial distribution of ore grades, and in the limit
of infinite order reduces to the log-normal distribution given in (3.29). The
resulting distribution is not scale invariant; the reason is that the results are
dependent on the size of the initial mass of ore chosen and this mass enters
into the tonnage-grade relation. We will show in Chapter 6 that the resulting
distribution is a multifractal.
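The binomial structure of the de Wijs cascade is easy to verify numerically (our sketch; φ₂ = 1.3 and C₀ = 10⁻⁴ are arbitrary illustrative values):

```python
import math

# de Wijs cascade of Figure 5.1(a): after n orders, C(n, k) of the 2**n
# elements have grade phi2**(n-k) * (2 - phi2)**k * C0, a binomial
# distribution of log-grade.
phi2, C0, n = 1.3, 1.0e-4, 10

grades = [phi2 ** (n - k) * (2.0 - phi2) ** k * C0 for k in range(n + 1)]
weights = [math.comb(n, k) / 2 ** n for k in range(n + 1)]  # mass fractions

mean_grade = sum(g * w for g, w in zip(grades, weights))
print(abs(mean_grade - C0) < 1e-12 * C0)  # mass balance: mean grade stays C0
```

The grades span φ₂ⁿC₀ down to (2 − φ₂)ⁿC₀ while the mass-weighted mean remains C₀, which is the mass balance used above.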
Cargill et al. (1980, 1981) and Cargill (1981) disagreed with the logarithmic dependence and suggested that a linear relationship is obtained if the logarithm of the tonnage is plotted against the logarithm of the mean grade. A simple model that gives this result was proposed by Turcotte (1986c) and is illustrated in Figure 5.1(b). This model follows very closely the model discussed above. Again, an original mass of rock M₀ is divided into two parts, each with a mass M₁ = M₀/2, and it is hypothesized that the mineral is concentrated into one of the two zero-order elements so that (5.1) and (5.2) are applicable. However, at the next step only the enriched element is further fractionated. The problem is renormalized so that the enriched element is treated in exactly the same way at every scale (order). This results in a fractal (scale-invariant) distribution. The concentration of ore into one of the two elements in the enriched first-order cell results in the concentrations given by (5.3) and (5.4). However, the depleted first-order cell continues to
have the depleted concentration C₁₂ given by (5.2). After n orders of concentration we have

Cₙ/C₀ = (M₀/Mₙ)^(ln φ₂/ln 2)

where Cₙ is the mean ore grade associated with the mass

Mₙ = M₀/2ⁿ

and

Cₙ = φ₂ⁿC₀

With the density assumed to be constant, M ∝ r³, where r is the linear dimension of the ore deposit considered, and we have

φ₂ = 2^(D/3)    (5.12)

Since the allowed range for φ₂ is 1 < φ₂ < 2, the allowed range for the fractal dimension is 0 < D < 3. To be fractal the distribution must be scale invariant. The scale invariance is clearly illustrated in Figure 5.1(b). The concentration of ore could be started at any order and the same result would be
obtained. The left half at order two looks like order one, the left half at order
three looks like order two, etc. This is not true for the distribution illustrated
in Figure 5.1(a).
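The scale invariance can also be checked order by order (our sketch, using φ₂ = 2^(D/3) from (5.12)):

```python
import math

# Scale-invariant model of Figure 5.1(b): after n orders the enriched mass is
# M_n = M0 / 2**n with grade C_n = phi2**n * C0, equivalent to
# C/C0 = (M0/M)**(D/3) with phi2 = 2**(D/3).
D = 2.0
phi2 = 2.0 ** (D / 3.0)          # enrichment factor implied by D
M0 = C0 = 1.0                    # dimensionless units

for n in range(1, 8):
    M = M0 / 2 ** n
    C = phi2 ** n * C0
    # the closed-form tonnage-grade relation holds at every order
    assert math.isclose(C / C0, (M0 / M) ** (D / 3.0))
print("C/C0 = (M0/M)**(D/3) holds at every order")
```

At every order the pair (Mₙ, Cₙ) lies on the same power-law tonnage-grade line, which is the scale invariance described above.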
We now generalize this model so that the original mass of rock is divided into two parts, but the masses of the two parts are not equal. The mass of the enriched element M₁₁ is related to the original mass M₀ by

M₁₁ = M₀/a

The mass ratio a can take on the range of values 1 < a < ∞. The concentration ratio φₐ is defined as before and is the ratio of the concentration in the enriched element C₁₁ to the reference concentration C₀:

C₁₁ = φₐC₀

The enriched zero-order element becomes a first-order cell; this cell is divided into two parts with the enriched part having a mass

M₂₁ = M₁₁/a = M₀/a²

and

C₂₁ = φₐC₁₁ = φₐ²C₀

and, following the argument that led to (5.12),

φₐ = a^(D/3)    (5.25)

It is clear from (5.25) that φₐ depends upon a. It is easy to show that this is reasonable. The case a = 2 was considered above. For a = 8 we have from (5.25)

φ₈ = 8^(D/3)    (5.26)

We now show that (5.26) is entirely equivalent to (5.12). The first-order concentration into one-eighth of the original mass, φ₈, must be equivalent to three orders of the concentration into one-half the original mass, φ₂. Thus we can write

φ₈ = φ₂³
It follows that

φ₈ = (2^(D/3))³ = 8^(D/3)

in agreement with (5.26). In Rayleigh distillation, a classic model for magmatic enrichment, the mineral is partitioned between the crystallizing solid and the residual liquid according to

dM_m/M_m = K_R dM/M
where M is the mass of magma, Mm is the mass of the mineral in the magma,
and KR is the solid-liquid partition coefficient. If KR < 1 the remaining
magma is systematically enriched. The allowed range of values for the solid-liquid partition coefficient is 0 ≤ K_R ≤ 1. The concentration of the mineral in the enriched residual magma, C_m, and the concentration of the mineral in the original magma, C_m0, are given by

C_m = M_m/M

and

C_m0 = M_m0/M₀

In terms of the mass ratio a = M₀/M, the enrichment factor of the fractal model is related to the partition coefficient by

φₐ = a + (1 − a)K_R    (5.42)
And the substitution of (5.42) into (5.24) gives (5.34). In the limit a → 1 the fractal model is identical to Rayleigh distillation. Furthermore, the substitution of (5.42) into (5.25) gives, in this limit,

D = 3(1 − K_R)    (5.43)

The fractal dimension of the ore deposit is simply related to the solid-liquid partition coefficient of the Rayleigh distillation process. For the allowed range of values for K_R, 0 ≤ K_R ≤ 1, the allowed range for D is 0 ≤ D ≤ 3. In the limit K_R → 1 there is little enrichment and D → 0; in the limit K_R → 0 there is very strong enrichment and D → 3.
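The inverse relation K_R = 1 − D/3 turns an observed fractal dimension into a partition coefficient; a minimal sketch (our illustration, using the fractal dimensions obtained later in this chapter):

```python
# In the Rayleigh-distillation limit D = 3(1 - KR), so an observed fractal
# dimension implies KR = 1 - D/3.
deposits = {"mercury": 2.01, "gold": 1.55, "uranium": 1.48, "copper": 1.16}
KR = {name: 1.0 - d / 3.0 for name, d in deposits.items()}
for name, k in KR.items():
    print(f"{name}: KR = {k:.2f}")
```

These approximately reproduce the coefficients quoted at the end of the chapter.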
In each of the enrichment steps in our fractal model the concentration Cₙ is the mean concentration in the mass of ore Mₙ. For applications to actual ore deposits we generalize the fractal relation between ore grade and tonnage to

C̄/C₀ = (M₀/M)^(D/3)    (5.44)

where M is the mass of the highest grade ores, which have a mean concentration C̄. The reference mass M₀ is the mass of rock from which the ore was derived, which has a mean concentration C₀.
As in the previous examples of naturally occurring fractal distributions,
there are limits to the applicability of (5.44). The lower limit on the ore grade
is clearly the regional background grade C₀ that has been concentrated. However, there is also an upper limit: the grade C̄ cannot exceed unity,
which corresponds to pure mineral.
The entire subject of tonnage-grade relations has been reviewed by Har-
ris (1984). There is clearly a controversy in the literature between Lasky's
law, which gives a log-normal dependence of tonnage on grade, and the
power-law or fractal dependence. Lasky (1950) and Musgrove (1965) have
argued in favor of the log-normal relation. On the other hand, Cargill et al.
(1980, 1981) and Cargill (1981) have argued in favor of the power-law de-
pendence. These authors based their analyses on records of annual produc-
tion and mean grade. Their results for mercury production in the United
States are given in Figure 5.2. The cumulative tonnage of mercury mined
prior to a specified date is divided by the cumulative tonnage of ore from
which the mercury was obtained to give the cumulative average grade. The
data points in Figure 5.2 represent the five-year cumulative average grade
(in weight ratio) versus the cumulative tonnage of ore. Using Bureau of
Mines records, Cargill et al. (1981) found that the total amount of mercury mined between 1890 and 1895 was M_m1, and the tonnage of ore from which this mercury was obtained was M₁; the mean grade for this period was C̄₁ = M_m1/M₁. The cumulative amount of mercury mined between 1890 and 1900 was M_m2, and the cumulative tonnage of ore from which the mercury was mined was M₂; the mean cumulative grade for this period was C̄₂ = M_m2/M₂.
These computations represent the two data points farthest to the left in Fig-
ure 5.2. The other data points represent the inclusion of additional five-year
periods in the computations. Cargill et al. (1980, 1981) and Cargill (1981)
further hypothesized that the highest-grade ores are usually mined first so
that the cumulative ratio of mineral tonnage to ore tonnage at a given time is
a good approximation to the mean ore grade of the highest-grade ores. Thus
it is appropriate to compare their data directly with the fractal relation (5.44).
Excellent agreement is obtained taking D = 2.01. This is strong evidence
that the enrichment processes leading up to the formation of mercury de-
posits are scale invariant.
It is also of interest to introduce a reference concentration of mercury
into the fractal relation. An appropriate choice is the mean measured concen-
tration in the continental crust. The mean crustal concentration of mercury as
given by Taylor (1964) is C₀ = 8 × 10⁻⁸ (0.08 ppm). Using this value in
(5.44) we find that the correlation line in Figure 5.2 is given by
with M in kilograms. According to the fractal model the mercury ore in the
United States has been concentrated from continental crust with a mass
M₀ = 4.05 × 10¹⁷ kg. Assuming a mean crustal density of 2.7 × 10³ kg m⁻³,
the mercury resources of the United States were concentrated from an origi-
nal crustal volume of 1.5 × 10⁵ km³. Since the total crustal volume of the United States is approximately 2.7 × 10⁸ km³, the source volume for the
mercury deposits is about 0.05 percent of the total. It is concluded that the
processes responsible for the enrichment of mercury ore deposits are re-
stricted to a relatively small fraction of the crustal volume.
It is seen from Figure 5.2 that the cumulative production of 1.2 × 10⁸ kg of mercury has been obtained from 2 × 10¹⁰ kg of ore of volume 7.4 × 10⁶ m³. Since the source region has a volume of 1.5 × 10⁵ km³, the fraction of the source region that has been mined is only 5 × 10⁻⁸. The results given in Figure 5.2 can also be used to determine how much mercury ore must be mined in the future to produce a specified amount of mercury. To produce the next 1.2 × 10⁸ kg of mercury will require the processing of about 1.6 × 10¹¹ kg
of ore.
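The mercury numbers above can be checked with a few lines of arithmetic (our sketch, assuming (5.44) in the power-law form C̄/C₀ = (M₀/M)^(D/3); the results agree with the quoted values to within the rounding of the fitted correlation line):

```python
D = 2.01
C0 = 8.0e-8          # mean crustal concentration of mercury (Taylor, 1964)
Mm = 1.2e8           # cumulative mercury produced, kg
M = 2.0e10           # cumulative ore mined, kg
Cbar = Mm / M        # cumulative mean grade, 6e-3

M0 = M * (Cbar / C0) ** (3.0 / D)      # implied crustal source mass, kg
V0_km3 = M0 / 2.7e3 / 1.0e9            # source volume at 2.7e3 kg m^-3, km^3
print(f"M0 ~ {M0:.1e} kg, source volume ~ {V0_km3:.1e} km^3")

# Ore needed to double cumulative production: since Mm ~ M**(1 - D/3), the
# cumulative ore tonnage must grow by a factor 2**(3/(3 - D)).
M_next = M * 2.0 ** (3.0 / (3.0 - D))
print(f"cumulative ore after the next 1.2e8 kg of mercury: ~{M_next:.1e} kg")
```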
Using production records of lode gold, Cargill (1981) gave cumulative
tonnage-grade data for lode gold production in the United States. The data
points in Figure 5.3 represent the five-year cumulative average grade versus
the cumulative tonnage of ore for the period 1906-1976. A good correlation
with the fractal relation (5.44) is obtained taking D = 1.55. This fractal di-
mension is somewhat less than the value obtained for mercury, indicating a
smaller enrichment factor.
Again, the mean crustal concentration is introduced as a reference con-
centration. Taking C₀ = 3 × 10⁻⁹ (3 ppb) (Taylor and McLennan, 1985) for
gold, we find the correlation line in Figure 5.3 is given by
with M in kilograms. According to the fractal model the lode gold in the
United States has been concentrated from a continental crustal mass of 3 × 10¹⁸ kg. Assuming a mean crustal density of 2.7 × 10³ kg m⁻³, the gold was concentrated from a crustal volume of 10⁶ km³ or about 0.4 percent of the to-
tal crustal volume.
Using copper production records in the same way, Cargill et al. (1981)
have also given cumulative grade-tonnage data for copper production in the
United States. Their results are given in Figure 5.4. The cumulative grade is
again given as a function of cumulative ore tonnage at five-year intervals.
The data obtained prior to 1920 fall systematically low compared to the later
data. Cargill et al. (1981) attributed this systematic deviation from a fractal
correlation to the adoption of an improved metallurgical technology for the
extraction of copper in the 1920s. A smaller fraction of the available copper
was extracted prior to this time so that the data points are low. It is again ap-
propriate to compare these data with the fractal relation (5.44). Assuming the
early data to be systematically low, excellent agreement is obtained taking
D = 1.16. This fractal dimension is almost a factor of two less than the frac-
tal dimension obtained for mercury ore. This indicates that the applicable en-
richment processes concentrate copper less strongly than they do mercury.
We again relate the fractal relation for the enrichment to the mean crustal
concentration. The mean concentration of copper in the upper crust as given
by Taylor and McLennan (1981) is C₀ = 2.5 × 10⁻⁵ (25 ppm). Using this
value in (5.44), we find that the correlation line in Figure 5.4 is given by
with M in kilograms. According to the fractal model, the copper ore in the
United States has been concentrated from continental crust with a mass
M₀ = 3.22 × 10¹⁹ kg. Assuming a mean upper crustal density of 2.7 × 10³ kg m⁻³, the copper resources of the United States were concentrated from an original crustal volume of 1.19 × 10⁷ km³. This represents about 4 percent
of the total crustal volume of the United States. The crustal volume from
which copper is enriched is nearly 100 times larger than the volume from
which mercury is enriched. It is concluded that the processes responsible for
the enrichment of copper are much more widely applicable than those for
mercury.
As our final example we consider data on the relationship between cu-
mulative tonnage and grade for uranium in the United States. Data for the
preproduction inventory as given by the US Department of Energy have
been tabulated by Harris (1984, p. 228) in terms of cumulative tonnage and
the average grade of this tonnage; these data are given in Figure 5.5. The
high-grade data are based on production records and the lower-grade data
are based on estimates of reserves. The higher-grade data are in excellent
agreement with the fractal relation (5.44) taking D = 1.48. Thus the enrich-
ment of uranium is intermediate between the enrichment of copper and
mercury. The predicted cumulative tonnage at lower grades falls below the
extrapolation of the fractal relation; this can be attributed to an underestima-
tion of the preproduction inventory at low grades.
It is again instructive to relate the fractal relation for the enrichment of
uranium to the mean crustal concentration. The mean concentration of ura-
nium in the upper crust as given by Taylor and McLennan (1981) is C₀ = 1.25 × 10⁻⁶ (1.25 ppm). Using this value in (5.44), we find that the correla-
tion line in Figure 5.5 is given by
with M in kilograms. According to the fractal model the uranium ore in the
United States has been concentrated from continental crust with a mass
M₀ = 6.4 × 10¹⁷ kg. Assuming a mean crustal density of 2.7 × 10³ kg m⁻³, the uranium resources of the United States were concentrated from an original crustal volume of 2.4 × 10⁵ km³. This represents about 0.09 percent of
the crustal volume of the United States. The crustal volume from which ura-
nium is enriched is about a factor of two larger than the crustal volume for
mercury but is a factor of 50 less than the crustal volume for copper.
In several examples the statistics on ore tonnage versus ore grade have
been shown to be fractal to a good approximation. This is not surprising
since two of the classic models for the generation of ore deposits, chromato-
graphic and Rayleigh distillation, both lead directly to fractal distributions.
The examples considered here yield a considerable range of fractal dimen-
sions: 2.01 for mercury, 1.55 for gold, 1.48 for uranium, and 1.16 for copper.
If Rayleigh distillation were applicable then, from (5.43), the applicable solid-liquid partition coefficients would be 0.33 for mercury, 0.48 for gold, 0.49 for uranium, and 0.61 for copper. It should be emphasized, however,
that the chromatographic model is a more likely explanation for the concen-
tration of these minerals.
Not all mineral deposits and related statistical data satisfy power-law
(fractal) distributions. A specific example is the frequency-size distribution
of diamonds (Deakin and Boxer, 1986).

5.3 Petroleum data

There is also evidence that the frequency-size distribution of oil fields obeys fractal statistics (Barton and Scholz, 1995). Drew et al. (1982) used the relation Nᵢ₋₁ = 1.67Nᵢ to estimate the number of fields of order i, Nᵢ, in the west-
ern Gulf of Mexico. Since the volume of oil in a field of order i is a factor of
two greater than the volume of oil in a field of order i - 1, their relation is
equivalent to a fractal distribution with D = 2.22. Barton and Scholz (1995)
find D = 2.49 for the Frio Strandplain play, onshore Texas. The number-size
statistics for oil fields worldwide as compiled by Carmalt and St. John
(1984) are given in Figure 5.6. A reasonably good correlation with the fractal
relation (2.6) is obtained taking D = 3.3. The large differences between
these values for the fractal dimension may be attributed to differences in the
regional geology, but it may also be due to difficulties in the data. It is often
difficult to determine whether adjacent fields are truly separate, and data on
reserves are often poorly constrained. Nevertheless, the applicability of frac-
tal statistics to petroleum reserves can have important implications. Reserve
estimates for petroleum have been obtained by using power-law (fractal) sta-
tistics and log-normal statistics. Accepting power-law statistics leads to con-
siderably higher estimates for available reserves (Barton and Scholz, 1995;
La Pointe, 1995; Crovelli and Barton, 1995).
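The conversion from Drew et al.'s order ratio to a fractal dimension is a one-line calculation (our sketch):

```python
import math

# N_{i-1} = 1.67 N_i doubles the field volume per order; with V ~ r**3 the
# linear size grows by 2**(1/3) per order, so 1.67 = 2**(D/3) and
# D = 3 ln(1.67) / ln(2).
D = 3.0 * math.log(1.67) / math.log(2.0)
print(f"D = {D:.2f}")  # -> D = 2.22
```

Similarly, Barton and Scholz's D = 2.49 corresponds to a number ratio of 2^(2.49/3) ≈ 1.78 per factor-of-two volume order.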
The model for the concentration of economic ore deposits given above
leads to a range of geometrically acceptable fractal dimensions. However,
the observed distribution for oil fields falls outside this range. This again il-
lustrates the difficulties associated with restrictions on power-law (fractal) distributions. As stated previously, we define a power-law statistical distribution as a fractal distribution.
It should not be surprising that the frequency-size statistics of oil pools
and oil fields are fractal; it was shown in Chapter 2 that topography is gener-
ally fractal. One consequence is that the frequency-size statistics of lakes have been found to be fractal (Meybeck, 1995). Because traps for oil involve
topography on impermeable sedimentary layers, it is expected that this
topography will also be fractal. Thus it is reasonable that the frequency-size
distribution of oil pools is fractal.
Barton and Scholz (1995) have examined the spatial distribution of hy-
drocarbon accumulations and have concluded that they obey fractal statis-
tics. Their results for production from the J sandstone of the Denver basin
are given in Figure 5.7. Production from this basin is primarily in the northeast corner of Colorado and the southwest corner of Nebraska. A 40 × 40-mile section of the basin is considered and this section is divided into 80 × 80 square cells of size 0.5 miles. The cells with one or more wells are illus-
trated with black dots in Figure 5.7 as drilled cells. The cells with one or
more wells that are either producing or had a show of hydrocarbons but at
quantities too small to produce are illustrated with black dots in Figure 5.7 as
producing or showing cells.
The box-counting technique was applied to both the drilling data and the
producing and showing data. The number of occupied boxes as a function of
the reciprocal of the box size is given in Figure 5.7 for both data sets. In both
cases good correlations were obtained with the fractal relation (2.2). For the
drilled cells the fractal dimension was D = 1.80; if every cell had been
drilled the fractal dimension would have been D = 2.0. For the producing
and showing cells the derived fractal dimension was D = 1.43. This result indicates that the complex processes responsible for the generation of petroleum traps lead to a fractal spatial distribution of oil pools. Barton and Scholz (1995) also examined the spatial distribution of hydrocarbon accumulations in the Powder River basin, Wyoming, and found a good correlation with fractal statistics taking D = 1.49.
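The box-counting procedure used in these studies can be sketched numerically. The following is a minimal illustration, not the code of Barton and Scholz: it applies box counting to a synthetic clustered point set (a Sierpinski gasket generated by the chaos game, standing in here for well locations) and estimates D from the slope of log N against log(1/r).

```python
import math, random

def box_count_dimension(points, sizes):
    """Estimate D from the slope of log N(r) versus log(1/r)."""
    xs, ys = [], []
    for s in sizes:
        # count distinct occupied boxes of side s
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
        sum((a - mx) ** 2 for a in xs)

# synthetic clustered set: Sierpinski gasket via the chaos game
random.seed(1)
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
x, y = 0.3, 0.3
pts = []
for i in range(30000):
    vx, vy = random.choice(verts)
    x, y = (x + vx) / 2, (y + vy) / 2   # jump halfway to a random vertex
    if i > 100:                          # discard the initial transient
        pts.append((x, y))

D = box_count_dimension(pts, [1 / 4, 1 / 8, 1 / 16, 1 / 32])
print(round(D, 2))   # near ln 3 / ln 2 = 1.58 for the gasket
```

For a space-filling set of drilled cells the same estimate would approach D = 2.0, as in the Denver basin example.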
Carlson (1991) examined the spatial distribution of 4775 hydrothermal
precious-metal deposits in the western United States and found that the
probability-density distribution for these deposits is fractal. Blenkinsop
(1994) found similar results for gold deposits in the Zimbabwe Archean
craton.
Problems
Problem 5.1. Determine the concentration factor φ* for an ore deposit with D = 2.
Problem 5.2. Determine the concentration factor φ* for an ore deposit with D = 1.
Problem 5.3. Determine the solid-liquid partition coefficient K corresponding to an ore deposit with D = 2.
Problem 5.4. Determine the solid-liquid partition coefficient K corresponding to an ore deposit with D = 1.
Problem 5.5. Consider the cubic model for mineral concentration illustrated in Figure 3.6. (a) In terms of the enrichment factor φ* defined by (5.26) and C̄0, what is the concentration in the seven depleted zero-order elements? (b) What is the concentration in the seven depleted first-order elements? (c) What is the allowed range for φ*? (d) What is the corresponding allowed range for D?
Problem 5.6. From the correlation for mercury production given in (5.45), how much pure mercury (C̄ = 1) would be expected?
Problem 5.7. From the correlation for mercury production given in (5.45), determine the total production of mercury when the mean grade of ore that has been mined reaches C̄ = 0.001.
Problem 5.8. From the correlation for lode gold production given in (5.46), how much pure gold (C̄ = 1) would be expected in the United States?
Problem 5.9. From the correlation for lode gold production given in (5.46), determine the total amount of lode gold mined to date. Assume that the mean grade of ore mined prior to the present is C̄ = 9 ppm.
Problem 5.10. From the correlation for copper production given in (5.47) determine the total production of copper to date. Assume that the mean grade of ore mined prior to the present time is C̄ = 0.008.
Problem 5.11. From the correlation for copper production given in (5.47), how much pure copper (C̄ = 1) would be expected?
Problem 5.12. The fractal dimension for the distribution of areas of lakes has
been found to be D = 1.55 (Kent and Wong, 1982). Assuming that the
mean depth of a lake is proportional to the square root of its area, what is
the fractal dimension for the distribution of water volumes in lakes?
Problem 5.13. Consider the data for the 40 × 40-mile section of the Denver basin given in Figure 5.7. What fraction of 1 × 1-mile sections would be expected to contain oil?
Chapter Six
FRACTAL
CLUSTERING
6.1 Clustering
We next relate fractal distributions to probability. This can be done using the sequence of line segments illustrated in Figure 2.1. The objective is to determine the probability that a step of length r will include a line segment. First consider the construction illustrated in Figure 2.1(a). At zero order the probability that a step of length r0 = 1 will encounter a line segment is p0 = 1; at first order we have r1 = 1/2 and p1 = 1/2, and at second order r2 = 1/4 and p2 = 1/4. Next consider the construction illustrated in Figure 2.1(c). At zero order the probability that a step of length r0 = 1 will encounter a line segment is p0 = 1; at first order we have r1 = 1/2 and p1 = 1, and at second order r2 = 1/4 and p2 = 1. Finally we consider the Cantor set illustrated in Figure 2.1(e). At zero order the probability that a step of length r0 = 1 will encounter a line segment is p0 = 1; at first order we have r1 = 1/3 and p1 = 2/3, and at second order r2 = 1/9 and p2 = 4/9.
The probability that a step of length ri will include a line segment can be generalized to

pi = ri^(1−D)    (6.2)

For the Cantor set the probability that a step of length ri = (1/3)^i encounters a line segment is pi = (2/3)^i, so that D = ln 2/ln 3, as was obtained previously.
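As a quick numerical check (an illustration added here, not part of the original text), the relation pi = ri^(1−D) can be inverted for D from the step length and occupation probability at any order of the Cantor set:

```python
import math

def cantor_step(i):
    """Step length and occupation probability at order i of the Cantor set."""
    return (1 / 3) ** i, (2 / 3) ** i

# invert p = r**(1 - D) for D
r, p = cantor_step(5)
D = 1 - math.log(p) / math.log(r)
print(D)   # ln 2 / ln 3 = 0.6309...
```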
The Cantor set is both scale invariant and deterministic. Its deterministic aspect can be eliminated quite easily. A scale-invariant random set is generated by randomly removing one-third of each line segment rather than always removing the middle third. This process is illustrated in Figure 6.1. The fractal dimension is unchanged and the probability relations derived above are still applicable.
We will use the examples given above as the basis for studying fractal clustering. We consider a series of point events that occur at specified times. To consider N point events that have occurred in the time interval τ0 we introduce the natural period τn = τ0/N. We then introduce a sequence of intervals and determine the fraction of intervals of a given length that include at least one event.
To illustrate fractal clustering by the "box method," we take intervals of length rn = (1/3)^n and determine the fraction pn that include at least one ninth-order element as a function of rn. An example is given by the open circles in Figure 6.2. The best-fit straight line has a slope of 0.368, so that p ~ r^0.368 and D = 0.632. The deviation from the exact value D = 0.6309 for the deterministic Cantor set is due to the reduced rate of curdling in the probabilistic set. If the same number of ninth-order elements is uniformly distributed (no clustering), the probability of finding an element within an interval from (6.4) is given by the solid circles in Figure 6.2. In this case, the slope is unity for r < (1/3)^9 and zero for r > (1/3)^9. Thus D = 0 for r < (1/3)^9, that is, a set of isolated points, and D = 1 for r > (1/3)^9, that is, a line.
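The box-method measurement can be reproduced in a few lines. This sketch is an added illustration using a randomization aligned to thirds (each segment always retains exactly two of its three thirds, so only positions are random); as a result the fitted dimension equals the Cantor value ln 2/ln 3 exactly, whereas the book's randomization gives the slightly different empirical slope quoted above.

```python
import math, random

random.seed(2)

def random_cantor(order):
    """Retained unit cells after randomly deleting one third of every
    segment at each order (positions random, counts deterministic)."""
    segs = [0]
    for _ in range(order):
        new = []
        for s in segs:
            for k in random.sample(range(3), 2):   # keep two of three thirds
                new.append(3 * s + k)
        segs = new
    return set(segs)

order = 9
cells = random_cantor(order)          # 2**9 = 512 cells of length 3**-9

# fraction p of intervals of length (1/3)**n that include a cell
xs, ys = [], []
for n in range(order + 1):
    scale = 3 ** (order - n)                  # unit cells per interval
    occupied = {c // scale for c in cells}
    xs.append(math.log((1 / 3) ** n))
    ys.append(math.log(len(occupied) / 3 ** n))

# slope of log p versus log r is 1 - D
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
    sum((a - mx) ** 2 for a in xs)
D = 1 - slope
print(round(D, 4))   # 0.6309 = ln 2 / ln 3
```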
Fractal clustering has been applied to seismicity by Sadovskiy et al. (1985) and by Smalley et al. (1987). The latter authors considered the temporal variation of seismicity in several regions near Efate Island in the New Hebrides island arc for the period 1978-1984. One of their examples is
given in Figure 6.3. During the period under consideration 49 earthquakes
that exceeded the minimum magnitude required for detection occurred in
the region. Time intervals T such that 8 min ≤ T ≤ 524,288 min were considered. The fraction of intervals with earthquakes, p, as a function of interval
length T is given in Figure 6.3(a) as the open circles. The solid line shows
the correlation with the fractal relation (6.2) with D = 0.255. The dashed
line is the result for uniformly spaced events. The results of a simulation
for a random distribution of 49 events in the time interval studied is given
in Figure 6.3(b). The random simulation (Poisson distribution) is signifi-
cantly different from the earthquake data and is close to the uniform distri-
bution.
For the Sierpinski carpet the probability that a square box of size ri = (1/3)^i will include a retained square is pi = (8/9)^i, so that D = ln 8/ln 3, as was previously found. The Sierpinski carpet can be applied to clustering in two dimensions in the same way that the Cantor set was applied in one dimension.
This is directly analogous to the box-counting algorithm discussed in Chap-
ter 2 and illustrated in Figure 2.8. The two-dimensional spatial clustering of
intraplate hot spot volcanism (i.e., Hawaii, etc.) has been studied by Jurdy
and Stefanick (1990). They found a fractal correlation with D = 1.2.
This approach can be extended to three dimensions using cubes of various sizes. The application to three dimensions is illustrated using the Menger sponge given in Figure 2.4(a). The objective is to determine the probability that a cube with size r encounters retained material. At zero order the probability that a cube of size r0 = 1 will include material is p0 = 1; at first order we have r1 = 1/3 and p1 = 20/27, and at second order we have r2 = 1/9 and p2 = 400/729. The probability that a cube of size ri includes retained material can be generalized to

pi = ri^(3−D)

For the Menger sponge the probability that a cube of size ri = (1/3)^i encounters retained material is pi = (20/27)^i, so that D = ln 20/ln 3, as was previously found.
The generalization of (6.2), (6.9), and (6.11) is

pi = ri^(d−D)

where d is the Euclidean dimension in which the set is embedded (d = 1, 2, or 3 in the examples above).
6.2 Pair-correlation techniques

[Figure: log-log correlation plot; the best-fit slope gives D = 1 − a = 0.631 (the fractal dimension for the Cantor set is D = ln 2/ln 3 = 0.6309).]
6.3 Lacunarity
It is clear that fractal constructs with identical fractal dimensions can have
quite different appearances. One example is the deterministic Cantor set illustrated in Figure 2.1(e) compared with the random Cantor set illustrated in Figure 6.1. Third-order examples of these sets are given in Figure 6.7. The
difference between these two sets is the distribution of the size of gaps. Man-
delbrot (1982) introduced the concept of lacunarity as a quantitative measure
of the distribution of gap sizes. Large lacunarity implies large gaps and a
clumping of points; small lacunarity implies a more uniform distribution of
gap sizes. Also included in Figure 6.7 are examples of a near uniform distri-
bution (near zero lacunarity) and a totally clumped distribution (high lacu-
narity). In each case a line segment with a length of 27 is divided into 27
equal parts, each of unit length, and 8 are retained.
Allain and Cloitre (1991) have introduced a quantitative measure of la-
cunarity, which we will use below. Alternative measures have been given by
Gefen et al. (1984) and by Lin and Yang (1986). The technique given by Al-
lain and Cloitre (1991) is illustrated in Figure 6.8. We consider the third-order Cantor set given in Figure 6.7(b). The total length is r0 = 27 and individual segments have unit length (r = 1). We consider a moving window of length r, which is translated in unit increments; the total number of window positions considered is r0 − r + 1. The moments Mq(r) of the number of retained segments per window are obtained by averaging over all window positions; for the example illustrated in Figure 6.7 the first and second moments M1(9) and M2(9) can be evaluated in this way. The lacunarity L is defined in terms of these moments by L(r) = M2(r)/[M1(r)]².
[Figure: lacunarity as a function of window size for the uniform, Cantor, random Cantor, and clumped sets.]
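The gliding-box calculation can be sketched directly. This minimal illustration is not Allain and Cloitre's code, and it assumes the moments are normalized by the number of window positions; it evaluates L(r) = M2(r)/[M1(r)]² for the third-order Cantor set of length 27:

```python
def cantor_cells(order):
    """Cantor pattern as 0/1 cells (1 = retained unit segment)."""
    cells = [1]
    for _ in range(order):
        cells = [c for cell in cells for c in (cell, 0, cell)]
    return cells

def lacunarity(cells, r):
    """Gliding-box lacunarity L = M2 / M1**2 over all window positions."""
    masses = [sum(cells[s:s + r]) for s in range(len(cells) - r + 1)]
    n = len(masses)
    m1 = sum(masses) / n
    m2 = sum(m * m for m in masses) / n
    return m2 / m1 ** 2

cells = cantor_cells(3)       # length 27, 8 retained unit segments
print(lacunarity(cells, 9))   # 1.235 under this normalization
```

A clumped arrangement of the same eight segments gives a larger value, and a near-uniform arrangement a value closer to 1, in line with the qualitative description above.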
6.4 Multifractals
where Li is the length of line in segment i and L is the total length of line. Since Σ(i=1 to n) Li = L, we have Σ(i=1 to n) fi = 1. The quantity fi is the probability that the remaining line segment is found in "box" i.
For the third-order Cantor set of unit length illustrated in Figure 6.11(a), we have L = 8/27. We now determine the values of fi for three cases, n = 1 (r = 1), n = 3 (r = 1/3), and n = 9 (r = 1/9). Taking n = 1 (r = 1) we have i = 1; in this one segment we have L1 = 8/27 and from (6.22) obtain f1 = 1. This is illustrated in Figure 6.11(b). For n = 3 (r = 1/3) we have i = 1, 2, 3; from Figure 6.11(c) we obtain L1 = 4/27, L2 = 0, L3 = 4/27 and from (6.22) find f1 = 1/2, f2 = 0, f3 = 1/2. With n = 9 (r = 1/9) we have i = 1, 2, 3, 4, 5, 6, 7, 8, 9; from Figure 6.11(d) we obtain L1 = 2/27, L2 = 0, L3 = 2/27, L4 = L5 = L6 = 0, L7 = 2/27, L8 = 0, L9 = 2/27; and from (6.22) find f1 = 1/4, f2 = 0, f3 = 1/4, f4 = f5 = f6 = 0, f7 = 1/4, f8 = 0, f9 = 1/4.
It is next necessary to define generalized moments Mq(r) of the set of fractions fi(r). This is done using the relation

Mq(r) = Σ(i=1 to n) [fi(r)]^q    (6.23)

where the sum is taken over the set of fractions and q is the order of the moment; (6.23) is valid for both integer and noninteger values of q as long as q ≠ 1. The special case q = 1 will be considered below. For the example given in Figure 6.11, we can obtain the moments of the distribution for any order q (except q = 1) using (6.23). We first take q = 0 and find the zero-order moments for r = 1, 1/3, and 1/9, with the result M0(1) = 1, M0(1/3) = 2, and M0(1/9) = 4. Note that any finite number raised to the power 0 is 1, but 0 raised to the power 0 is taken to be 0. We next take q = 2 and find the second-order moments for r = 1, 1/3, and 1/9 with the result:

M2(1) = 1² = 1
FRACTAL CLUSTERING 115
In the limit q → 1 we write

f^q = f exp[(q − 1) ln f] ≈ f[1 + (q − 1) ln f]    (6.25)

so that for q = 1 the moments are obtained from the entropy

S(r) = −Σ(i=1 to n) fi ln fi

with 0 ln 0 taken to be zero. For the third-order Cantor set we have, for example,

S(1/9) = −(1/4) ln(1/4) − 0 ln 0 − (1/4) ln(1/4) − 0 ln 0 − 0 ln 0 − 0 ln 0 − (1/4) ln(1/4) − 0 ln 0 − (1/4) ln(1/4) = ln 4
n
where Ai is the area in box i and A is the total retained area. Again Σ(i=1 to n) Ai = A, so that Σ(i=1 to n) fi = 1.
For the second-order Sierpinski carpet illustrated in Figure 6.12(a) we have A = 64/81. We now determine the values of fi for two cases, n = 1 (r = 1) and n = 9 (r = 1/3). Taking n = 1 we have i = 1; in this one area we have A1 = 64/81 and from (6.40) f1 = 1. This is illustrated in Figure 6.12(b). For n = 9 we have i = 1, 2, 3, 4, 5, 6, 7, 8, 9 as illustrated in Figure 6.12(c); we have A1 = A2 = A3 = A4 = 8/81, A5 = 0, A6 = A7 = A8 = A9 = 8/81. From (6.40) we have f1 = f2 = f3 = f4 = 1/8, f5 = 0, f6 = f7 = f8 = f9 = 1/8. From (6.23) we find
where C̄i is the mean concentration in segment i and C̄0 is the overall mean concentration.
We now determine the values of fi for n = 1 (r = 1), n = 2 (r = 1/2), n = 4 (r = 1/4), and n = 8 (r = 1/8). Taking n = 1 we have i = 1, C̄1 = C̄0, and f1 = 1; for n = 2 we have i = 1, 2, C̄1 = φ2 C̄0, C̄2 = (2 − φ2) C̄0, f1 = (1/2)φ2, f2 = 1 − (1/2)φ2; for n = 4 we have i = 1, 2, 3, 4, C̄1 = φ2² C̄0, C̄2 = C̄3 = φ2(2 − φ2) C̄0, C̄4 = (2 − φ2)² C̄0, f1 = [(1/2)φ2]², f2 = f3 = (1/2)φ2[1 − (1/2)φ2], f4 = [1 − (1/2)φ2]²; for n = 8 we have i = 1, 2, 3, 4, 5, 6, 7, 8, C̄1 = φ2³ C̄0, C̄2 = C̄3 = C̄4 = φ2²(2 − φ2) C̄0, C̄5 = C̄6 = C̄7 = φ2(2 − φ2)² C̄0, C̄8 = (2 − φ2)³ C̄0, f1 = [(1/2)φ2]³, f2 = f3 = f4 = [(1/2)φ2]²[1 − (1/2)φ2], f5 = f6 = f7 = (1/2)φ2[1 − (1/2)φ2]², f8 = [1 − (1/2)φ2]³. These results are illustrated graphically in the accompanying figure.
We can now determine the generalized moments for the De Wijs multiplicative cascade. From (6.23) we find for q = 0 that

M0(1/8) = {[(1/2)φ2]³}⁰ + 3{[(1/2)φ2]²[1 − (1/2)φ2]}⁰ + 3{(1/2)φ2[1 − (1/2)φ2]²}⁰ + {[1 − (1/2)φ2]³}⁰ = 1 + 3 + 3 + 1 = 8

with q ≠ 1 and 0 < φ2 < 2. Note that this result is also valid for noninteger values of q.
For q = 1 from (6.38) we find

S(1) = −1 ln 1 = 0

which is valid for both integer and noninteger values of q except q = 1. For q = 1 we find from (6.37) that
2. The fractions fi for each box of size rn are determined. The length of faults and joints in box i is Li. The fraction of the faults and joints in box i, fi, is determined using (6.22).
3. The generalized moments Mq(r) are obtained using (6.23) if q ≠ 1 and (6.28) if q = 1.
4. If the Mq(r) have a power-law dependence on r for specified values of q, then the fractal dimensions Dq are obtained using (6.3) if q ≠ 1 and (6.37) if q = 1.
5. The Dq are given as a function of q over the range 0 ≤ q < ∞.
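The procedure above can be sketched for the Cantor set, for which all Dq collapse to ln 2/ln 3 (a uniform fractal is a degenerate multifractal; a cascade with φ2 ≠ 1 would give q-dependent Dq). This added illustration assumes Mq(r) ~ r^((q−1)Dq), consistent with the worked values above:

```python
import math

def cantor_masses(order):
    """Box masses f_i for the Cantor set at box size r = 3**-order."""
    cells = [1.0]
    for _ in range(order):
        cells = [c / 2 for cell in cells for c in (cell, 0.0, cell)]
    return cells   # the masses sum to 1

def D_q(q, order):
    """Generalized dimension from a single box size (q != 1)."""
    f = [x for x in cantor_masses(order) if x > 0]   # drop empty boxes
    Mq = sum(x ** q for x in f)                      # generalized moment
    r = (1 / 3) ** order
    return math.log(Mq) / ((q - 1) * math.log(r))

print(round(D_q(0, 3), 4), round(D_q(2, 3), 4))   # both 0.6309
```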
N, = (7) =
n!
j!(n - j)!
i=O
N, = 1 rather than xn
i=l
f ; = 1 as in the
previous analysis. Substituting (6.46)and (6.47)into (6.49)gives
logj! =j l o g j -j (6.56)
a = 1 - (1 - x)----log +, x 1% ( 2 -
-
$2)
log 2 log 2
Problems
Problem 6.1. Consider the construction given in Figure 2.1(b). What is the probability that a step of length r includes a line segment for r = 1, 1/3, 1/9, 1/27?
Problem 6.2. Consider the construction given in Figure 2.1(d). What is the probability that a step of length r includes a line segment for r = 1, 1/3, 1/9, 1/27?
Problem 6.3. Consider the construction given in Figure 2.1(f). What is the probability that a step of length r includes a line segment for r = 1, 1/5, 1/25?
Problem 6.4. A line segment is divided into seven equal parts and four are retained. The construction is repeated. What is the probability that a step of length r includes a line segment for r = 1/7, 1/49, 1/343?
Problem 6.5. A line segment is divided into seven equal parts and three are retained. The construction is repeated. What is the probability that a step of length r includes a line segment for r = 1/7, 1/49, 1/343?
Problem 6.6. Consider the construction given in Figure 2.3(c). What is the probability that a square box with dimensions r includes a retained square when r = 1, 1/3, 1/9?
Problem 6.7. A unit square is divided into four smaller squares of equal size. Two diagonally opposite squares are retained and the construction is repeated. What is the probability that a square box with dimensions r includes a retained square when r = 1, 1/2, 1/4?
Problem 6.8. A unit square is divided into 25 smaller squares of equal size. All the squares on the boundary and the central square are retained and the construction is repeated. What is the probability that a square box with dimensions r includes a retained square when r = 1, 1/5, 1/25?
Problem 6.9. Consider the construction given in Figure 2.4(b). What is the probability that a cube with dimensions r includes solid when r = 1, 1/2, 1/4, 1/8?
Problem 6.10. A unit cube is divided into 27 smaller cubes of equal volume. All the cubes are retained except for the central one and the construction is repeated. What is the probability that a cube with dimensions r includes solid when r = 1, 1/3, 1/9?
Problem 6.11. What is the pair-correlation distribution for three equally
spaced particles on a line of unit length?
Problem 6.12. What is the pair-correlation distribution for three particles on the corners of an equilateral triangle with sides of unit length?
Problem 6.13. What is the pair-correlation distribution for the eight particles on the corners of a unit cube?
Problem 6.14. Consider the third-order Cantor set illustrated in Figure 6.11(a). Determine M3(1), M3(1/3), and M3(1/9). Write an expression for D3 in terms of ri and rj and determine its value for the third-order Cantor set.
Problem 6.15. Consider the third-order Cantor set illustrated in Figure 6.11(a). Determine M1/2(1), M1/2(1/3), and M1/2(1/9). Write an expression for D1/2 in terms of ri and rj and determine its value for the third-order Cantor set.
Problem 6.16. Consider the second-order set illustrated in Figure 2.1(f) (L = 9/25). Determine Li and fi for n = 1 and n = 5; determine M0(1), M0(1/5), M2(1), M2(1/5), S(1), S(1/5), D0, D1, and D2.
Problem 6.17. A line segment is divided into seven equal parts and four are retained (L = 16/49). Determine Li and fi for n = 1 and n = 7. Determine M0(1), M0(1/7), M2(1), M2(1/7), S(1), S(1/7), D0, D1, and D2.
Problem 6.18. Consider the second-order Sierpinski carpet illustrated in Figure 6.12(a). Determine M3(1) and M3(1/3); determine D3.
Problem 6.19. A unit square is divided into four smaller squares of equal size. Two diagonally opposite squares are retained (A = 1/2). Determine Ai
Chapter Seven
SELF-AFFINE
FRACTALS

[Figure 7.1: elevation profiles along linear tracks; vertical exaggeration V.E. = 1:270.]
7.2 Time series
with
and
The time s is the lag; with s = 0 we have cs = c0 = V (the variance) and rs = 1. As s increases, rs generally decreases as the statistical correlations of y(t + s) with y(t) decrease. The plot of rs versus s is known as a correlogram. A rapid decay of the correlogram indicates weak persistence (short memory), and a slow decay indicates strong persistence (long memory). Since the time series is continuous, it is required that rs → 1 as s → 0.
For a discontinuous time series the autocorrelation function is given by
with
and
Note that neither the mean ȳ nor the variance V is used in this definition. For
a discontinuous time series we have
This result is compared with each of the four Brownian walks illustrated in
Figure 7.4(b). A Brownian walk is an example of a statistical self-affine
fractal.
The association of white noises and Brownian walks is the basis for the
kinetic theory of gases. The distribution of distances in a specified direction
that a molecule in a gas travels between collisions is Gaussian. Thus the se-
quence of distances that a molecule travels in a gas is a Gaussian white
noise. The sum of these distances, the distance the molecule diffuses in the
gas, is a Brownian walk. This is the basic reason that the distance that a con-
taminant diffuses in a gas scales with the square root of time.
Several empirical models have been developed to produce persistent
(correlated) noises (Bras and Rodriguez-Iturbe, 1993). We first consider the
moving average model (MA). In this model the discrete time series is given by

yi − ȳ = εi + θ1 εi−1 + θ2 εi−2 + ⋯ + θq εi−q

where εi is again the random variable described above and the θj (θ1, θ2, …, θq) are q prescribed coefficients relating yi to the q previous values of εi. The parameters in this model are the mean ȳ, the variance of the white noise σε², and θ1, θ2, …, θq. Taking q = 1 the MA model simplifies to

yi − ȳ = εi + θ1 εi−1

The autocorrelation function of this series is

rk = θ1/(1 + θ1²) for k = 1, and rk = 0 for k > 1

The correlation is very short since only the adjacent point has a non-zero autocorrelation function. Examples of this time series with θ1 = 0, 0.2, 0.5, 0.9 are given in Figure 7.5.
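The lag-one result can be checked by simulation. This added sketch generates a long MA(1) series, yi = εi + θ1 εi−1, and compares the sample autocorrelation with θ1/(1 + θ1²):

```python
import random, statistics

random.seed(3)

def ma1(theta, n):
    """MA(1) series y_i = eps_i + theta * eps_(i-1) with zero mean."""
    eps = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
    return [eps[i] + theta * eps[i - 1] for i in range(1, n + 1)]

def autocorr(y, k):
    m = statistics.fmean(y)
    c0 = sum((v - m) ** 2 for v in y)
    ck = sum((y[i] - m) * (y[i + k] - m) for i in range(len(y) - k))
    return ck / c0

theta = 0.5
y = ma1(theta, 200000)
r1 = autocorr(y, 1)
print(round(r1, 3))               # theory: 0.5 / 1.25 = 0.4
print(round(autocorr(y, 5), 3))   # essentially zero beyond lag 1
```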
An alternative empirical model for a persistent (correlated) time series is the autoregressive model (AR). This time series is given by

yi − ȳ = φ1(yi−1 − ȳ) + φ2(yi−2 − ȳ) + ⋯ + φp(yi−p − ȳ) + εi

where εi is again the random variable and the φj (j = 1, 2, …, p) are prescribed coefficients relating yi to the p previous values of yi − ȳ. Clearly the MA and AR models are closely related. In the MA model q previous values of the random variable εi are included, and in the AR model p previous values of the deviation of the time series from the mean, yi − ȳ, are included. Taking p = 1 the AR model simplifies to

yi − ȳ = φ1(yi−1 − ȳ) + εi

The mean for this correlated noise is again ȳ and its variance is

V = σε²/(1 − φ1²)

The AR model has longer-range correlations than the MA model, but the correlations remain short range. Examples of this time series with φ1 = 0, 0.2, 0.5, and 0.9 are given in Figure 7.6. There is clearly much greater smoothing of the time series in the AR model than in the MA model. Correlograms for these time series are given in Figure 7.7. The agreement with (7.22) is excellent; the correlations increase systematically with increasing values of φ1, as expected.
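A parallel check for the AR(1) model (an added sketch, not from the text): the simulated variance should approach σε²/(1 − φ1²) and the lag-one autocorrelation should approach φ1.

```python
import random, statistics

random.seed(4)

def ar1(phi, n):
    """AR(1) series y_i = phi * y_(i-1) + eps_i with zero mean."""
    y, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + random.gauss(0.0, 1.0)
        y.append(prev)
    return y

phi = 0.9
y = ar1(phi, 200000)

var = statistics.pvariance(y)         # theory: 1 / (1 - 0.81) = 5.26
m = statistics.fmean(y)
c0 = sum((v - m) ** 2 for v in y)
r1 = sum((y[i] - m) * (y[i + 1] - m) for i in range(len(y) - 1)) / c0
print(round(var, 2), round(r1, 2))    # near 5.26 and 0.9
```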
Figure 7.6. Examples of autoregressive (AR) time series from (7.20) with (a) φ1 = 0, (b) φ1 = 0.2, (c) φ1 = 0.5, and (d) φ1 = 0.9; in each case ȳ = 0 and σε² = 1.
The variables in this model have been discussed above. With p = 0 the ARMA model reduces to the moving average (MA) model; with q = 0 the ARMA model reduces to the autoregressive (AR) model. Taking p = q = 1 the ARMA model simplifies to

yi − ȳ = φ1(yi−1 − ȳ) + εi + θ1 εi−1
where Ha is again the Hausdorff measure. Ahnert (1984) found that actual topography is in excellent agreement with (7.28), taking Ha = 0.635 ± 0.105. Similar results were obtained by Dietler and Zhang (1992). Topography is an excellent example of a self-affine fractal.
where the appropriate scale for the time series is T. In (7.31) the time series diverges with the interval T according to the power law T^Ha. Comparing (7.31) and (7.32) we define

This is the basic definition of the fractal dimension for a self-affine fractal. Below we give an alternative derivation that gives the same result. For the deterministic self-affine fractal illustrated in Figure 7.2 we found Ha = 1/2 and D = 3/2, in agreement with (7.33). For a Brownian walk we also have Ha = 1/2 and D = 3/2. For 1 < D < 2 we require that 0 < Ha < 1.
An alternative derivation of the fractal dimension of a time series can be
obtained by using the box-counting method illustrated in Figure 7.3. We first
introduce a rectangular reference "box" with a width T and height σT = σ(T). Note that since the units of the signal y, and therefore the units of the standard deviation σ, can differ from the unit of time t, the aspect ratio (width/height) of the box can have arbitrary units. If we measure an electric current as a function of time, the width of the box is in seconds and its height is in amperes.
We next divide the time interval T into n smaller time intervals of length Tn = T/n. We also introduce scaled smaller boxes of width Tn and height σn = σT/n. These boxes have the same aspect ratio as the reference box. However, the standard deviation associated with the interval Tn, σ(Tn) = σ(T/n), is not equal to σn. We determine the number of scaled smaller boxes Nn of size Tn × σn that are required to cover the area of width T and height σT. This is given by
A time series with a single periodic component will have a single spike in its
spectrum at that frequency. A time series with several components will have
spikes in its spectrum at those frequencies. A white noise has no embedded
frequencies and its spectrum is flat. The quantity |Y(f, T)|² df is the contribution to the total energy of y(t) from those components with frequencies between f and f + df. The vertical bars in |Y| refer to the absolute value of the complex quantity. The power is obtained by dividing by T. The power spectral density of y(t) is defined by

S(f) = |Y(f, T)|²/T

in the limit T → ∞. The product S(f) df is the power in the time series associated with the frequency range between f and f + df. For a time series that is a self-affine fractal, the power spectral density has a power-law dependence on frequency:
Substituting (7.42) and making the change of variable t' = rt, we obtain
From the definition of the power spectral density given in (7.40) we obtain
β = 2Ha + 1 = 5 − 2D

For a self-affine fractal (0 < Ha < 1, 1 < D < 2) we have 1 < β < 3. For a Brownian walk with Ha = 1/2 (D = 3/2) we have β = 2.
The power β/2 is used because the power spectral density is proportional to the amplitude squared. The amplitudes of the small-m coefficients correspond to short wavelengths λm and large wave numbers km = 2π/λm. The large-m coefficients correspond to long wavelengths and small wave numbers.
(4) An inverse discrete Fourier transform is taken of the filtered Fourier
coefficients. The sequence of points is given by
has been rescaled to have zero mean ȳ = 0 and unit variance V = 1. The fractional Gaussian noise in Figure 7.8(a) with β = 1.0 is statistically identical to the fractional Brownian walk in Figure 7.8(b) with β = 1.0.
As the value of β is increased the contribution of the short-wavelength (large wave number) terms is reduced. The result is that adjacent values in the time series become increasingly correlated and profiles are smoothed. The persistence in the time series is increased. This is clearly illustrated in Figure 7.8 as β is increased from 0 to 3.0. With β = −0.5 and −1.0 the short-wavelength contributions dominate over the long-wavelength contributions. These time series are antipersistent, and adjacent values are less correlated than for the random white noise (β = 0).
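The filtering idea can be illustrated compactly. In this added sketch the target spectrum is imposed directly in the frequency domain (deterministic amplitudes proportional to f^(−β/2), random phases), rather than by filtering a white-noise realization as in the four-step recipe above; the resulting real series has spectral amplitudes that fall off exactly as f^(−β/2):

```python
import cmath, math, random

random.seed(5)

def fractional_series(n, beta):
    """Real series whose spectral amplitudes scale as f**(-beta/2)."""
    coef = [0j] * n
    for m in range(1, n // 2):
        amp = m ** (-beta / 2.0)
        phase = random.uniform(0.0, 2.0 * math.pi)
        coef[m] = amp * cmath.exp(1j * phase)
        coef[n - m] = coef[m].conjugate()   # Hermitian symmetry: real output
    # inverse discrete Fourier transform, O(n**2) but fine for small n
    return [sum(coef[m] * cmath.exp(2j * math.pi * m * t / n)
                for m in range(n)).real / n for t in range(n)]

n = 256
y = fractional_series(n, 2.0)   # beta = 2: a Brownian-walk-like profile
print(len(y))
```

Recovering two Fourier amplitudes of the generated series confirms the imposed scaling: |Y(2)|/|Y(8)| = (2/8)^(−β/2) = 4 for β = 2.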
An alternative method for the direct generation of fractional Brownian walks is the method of successive random additions (Voss, 1985a, 1988). Consider the time interval 0 ≤ t ≤ 1 as illustrated in Figure 7.9. Random values of y are generated based on the Gaussian probability distribution given in (3.15) with zero mean ȳ = 0 and unit variance V1 = 1. Three of these values are placed at t = 0, 1/2, and 1 as shown in Figure 7.9(a). Two straight lines are drawn between these three points. The midpoints of these two line segments are taken as initial values of y at t = 1/4 and 3/4, as illustrated in Figure 7.9(b).
The five points are now given random additions. These random additions are also based on the Gaussian probability distribution (3.15) with zero mean ȳ = 0 but with a reduced variance given by (7.29). Since the interval has been reduced by a factor of two, the variance is given by V2 = (1/2)^(2Ha) V1. For our example we take Ha = 1/2 so that V2 = 1/2. The five resulting random additions are given in Figure 7.9(c). After addition to the five values of y1 in Figure 7.9(b), the resulting five values of y2 are given in Figure 7.9(d). Again the five points are connected by four straight-line segments and the four midpoints are taken as initial values of y at t = 1/8, 3/8, 5/8, and 7/8, as illustrated in Figure 7.9(e). All nine points are now given random additions using a Gaussian probability distribution (3.15) with zero mean but a further reduced variance from (7.29), V3 = (1/2)^(2Ha) V2. Again taking Ha = 1/2 we have V3 = 1/4. The nine random additions are given in Figure 7.9(f). After addition to the nine values of y2 given in Figure 7.9(e), the resulting nine values of y3 are given in Figure 7.9(g). The process is repeated until the desired number of points is obtained. Our example with 4097 points is given in Figure 7.9(h). With Ha = 1/2 and β = 2 this is a Brownian walk and strongly resembles the Brownian walks given in Figures 7.4 and 7.8. A sequence of fractional Brownian walks generated by the method of successive random additions is given in Figure 7.10. Fractional Brownian walks are given for Ha = 0 (β = 1), Ha = 0.25 (β = 1.5), Ha = 0.50 (β = 2), the same as Figure 7.9(h), Ha = 0.75 (β = 2.5), and Ha = 1.00 (β = 3); in each case 4097 points are given. As expected, these noises closely resemble those generated by the filtering technique given in Figure 7.8. A detailed comparison of fractional Gaussian noises and fractional Brownian walks using the Fourier filtering technique and the method of successive random additions has been given by Gallant et al. (1994). These authors also considered a third method using Weierstrass-Mandelbrot functions.
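The method of successive random additions reduces to a few lines. This added sketch interpolates midpoints and then perturbs every point with Gaussian additions whose variance is reduced by (1/2)^(2Ha) at each level, as described above:

```python
import random

random.seed(6)

def successive_random_additions(levels, Ha):
    """Fractional-Brownian-walk sketch on 0 <= t <= 1."""
    y = [random.gauss(0.0, 1.0) for _ in range(3)]   # t = 0, 1/2, 1
    var = 1.0
    for _ in range(levels):
        mid = [(a + b) / 2.0 for a, b in zip(y, y[1:])]   # midpoints
        y = [v for pair in zip(y, mid) for v in pair] + [y[-1]]
        var *= 0.5 ** (2.0 * Ha)        # variance reduced at each level
        y = [v + random.gauss(0.0, var ** 0.5) for v in y]
    return y

walk = successive_random_additions(11, 0.5)   # Ha = 1/2, beta = 2
print(len(walk))   # 2**12 + 1 = 4097 points, as in Figure 7.9(h)
```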
Just as the fractional Gaussian noises generated using the filtering technique with −1 ≤ β ≤ 1 can be summed to give fractional Brownian walks with 1 ≤ β ≤ 3, the fractional Brownian walks generated using the method of successive random additions with 1 < β < 3 can be differenced to give fractional Gaussian noises with −1 ≤ β ≤ 1. Extended fractional Gaussian noises with −3 ≤ β ≤ −1 can be obtained by differencing the fractional Gaussian noises with −1 ≤ β ≤ 1. Similarly, extended fractional Brownian walks with 3 ≤ β ≤ 5 can be obtained by summing fractional Brownian walks with 1 ≤ β ≤ 3. Although the mathematical definition of self-affine fractals restricts the applicable range of β to 1 ≤ β ≤ 3, naturally occurring time series with a power-law dependence of the power spectral density on frequency have values of β outside this range. Just as naturally occurring self-similar power-law distributions may or may not fall within the range of D values prescribed by mathematical constraints, so too naturally occurring self-affine time series may or may not fall within the range of β values prescribed by mathematical constraints.
Using the definition of the semivariance γs given in (7.9), semivariograms for several of the fractional Gaussian noises and fractional Brownian walks illustrated in Figure 7.8 are given in Figure 7.11. For the uncorrelated Gaussian white noise, β = 0, the semivariance scatters statistically about γ = 1, as expected since V = 1. For β = 1, 2, and 3 excellent correlations are obtained with the fractal relation (7.30). For β = 2 we find Ha = 0.47 compared to the expected value Ha = 0.50.
The values of Ha obtained for the best fit of (7.30) to the semivariograms in the range −1 ≤ β ≤ 5 are given in Figure 7.12. The straight-line correlation is with the self-affine fractal relation (7.48). Quite good agreement is found in the range 1 < β < 3, where the fractional Brownian walks are expected to be self-affine fractals.
From Figure 7.12 it is seen that Ha ≈ 0 for fractional Gaussian noises in the range −1 < β < 1. From (7.29) and (7.35) we conclude that the variance V and standard deviation σ are not dependent on the length of the signal T. Thus these fractional noises are stationary even though adjacent values may be correlated or anticorrelated. The fractional Brownian walks in the range 1 < β < 3 are clearly nonstationary from (7.29) and (7.35), since Ha varies from 0 to 1 and V and σ have a power-law dependence on the length of the signal T.
The fractional Gaussian noises and fractional Brownian walks we have con-
sidered have both been based on a Gaussian distribution of values. Thus the
resulting time series have both positive and negative values. Many naturally
occurring time series have only positive values. For example, the volumetric
flow in a river Q(t) is always positive. Another example is the density or
porosity in a well, which is also always positive. The coefficient of variation
cv is the ratio of the standard deviation of the signal to its mean (3.33).
[Figure: portions of time series with coefficients of variation (a) cv = 0.2 and (b) cv = 0.5.]
upstream of the dam Q(t). The flow out of the reservoir Q̄ is assumed to be
the mean of the flow into the reservoir for a period T, and the volume of
water V(t) stored in the reservoir is given by

V(t) = V(0) + ∫₀ᵗ Q(t′) dt′ − t Q̄   (7.53)

The mean ȳ and standard deviation σ are obtained from (3.1) and (3.3). The
Hurst exponent Hu is obtained from

(R_N/S_N)_av = (N/2)^Hu   (7.60)
The values of Hu obtained for the best fit to the Hurst relation (7.60) in
the range −3 ≤ β ≤ 3 are given in Figure 7.16. The straight-line correlation is
with (7.61). Reasonably good agreement is found in the range −1 < β < 1.
The Hurst exponent provides a quantitative measure of persistence and an-
tipersistence for fractional Gaussian noises. Extensive R/S analyses of frac-
tional Gaussian noises and fractional Brownian walks have been carried out
by Bassingthwaighte and Raymond (1994).
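The rescaled-range procedure can be sketched as follows (our own minimal implementation; the window sizes and the least-squares fit are illustrative choices):

```python
import numpy as np

def rescaled_range(y):
    """R/S for one window: the range of the cumulative departure from
    the mean divided by the standard deviation (Hurst's R_N/S_N)."""
    y = np.asarray(y, dtype=float)
    z = np.cumsum(y - y.mean())
    return (z.max() - z.min()) / y.std()

def hurst_exponent(y, window_sizes):
    """Least-squares slope of log(R/S)_av versus log N, an estimate of
    Hu in the Hurst relation (R_N/S_N)_av ~ (N/2)^Hu."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        segments = [y[i:i + n] for i in range(0, len(y) - n + 1, n)]
        rs = np.mean([rescaled_range(s) for s in segments])
        logs_n.append(np.log(n))
        logs_rs.append(np.log(rs))
    return np.polyfit(logs_n, logs_rs, 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
Hu = hurst_exponent(white, [8, 16, 32, 64, 128, 256])
```

For an uncorrelated Gaussian noise the fitted Hu is in the vicinity of 0.5; small windows are known to bias classical R/S estimates somewhat upward.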
Several closely related techniques have been introduced to quantify the
self-affine properties of observed time series. Malinverno (1990) introduced
the roughness-length method. In this method the local trend is determined as
a function of window length. Ivanov (1994a, b) introduced counterscaling.
Two types of counterscaling were considered. In the first the variance was
determined for different window lengths. In the second the means were ob-
tained for various window lengths and the variances of these means were ob-
tained as a function of window length. Gomes Da Silva and Turcotte (1994)
have applied the counterscaling technique to fractional noises and walks.
Three examples of elevation along linear tracks were given in Figure 7.1.
These are equivalent to time series and are examples of naturally occurring
self-affine fractals. Many authors have carried out Fourier spectral analyses
of topography and bathymetry along linear tracks (Bell, 1975, 1979; Berk-
son and Mathews, 1983; Barenblatt et al., 1984; Fox and Hayes, 1985;
Gilbert and Malinverno, 1988; Fox, 1989; Gilbert, 1989; Malinverno, 1989,
1995; Mareschal, 1989). In general a power-law dependence of the power
spectral density on wave number was found with β = 2 (D = 1.5). Similar
results were obtained for Venus by Kucinskas et al. (1992). Thus the eleva-
tion of topography is approximately a Brownian walk. Twenty-four exam-
ples of the dependence of the power spectral density on wave number for lin-
ear topographic profiles from three different parts of Oregon are given in
Figure 7.17. One-dimensional Fourier spectral analyses were obtained using
the periodogram method. Three different regions were considered with dif-
ferent geomorphic and tectonic settings. The Willamette lowland is domi-
nated by sedimentary processes, the Wallowa Mountains are associated with
a major tectonic uplift, and the Klamath Falls area belongs to the basin and
range tectonic regime. The topography was digitized along lines of latitude
and longitude at seven points per kilometer. For each of the three regions, 20
equally spaced one-dimensional profiles of length 512 points were analyzed
in both the latitudinal and longitudinal directions. Log-log plots of the spec-
tral power density versus wave number show a good power-law dependence
in all three regions, as shown in Figure 7.17. Eight typical examples are given
for each of the three regions. The best-fit fractal dimension for each profile
is obtained using (7.41). The mean fractal dimensions for three regions are
given in Table 7.1. The mean values are close to D = 1.5, indicating that the
spectral power density corresponds to a Brownian walk to a good approxi-
mation. A variety of previous studies have found values for D near 1.5.
[Figure 7.17: power spectra for linear topographic profiles (tracks 1–8) from the three Oregon regions, including the Klamath Falls area.]
Two implications of this result will be discussed. The first is the compar-
ison with the value of D for topography obtained in Chapter 2 using the ruler
method. As illustrated in Figure 2.7 the ruler method generally gives fractal
dimensions near D = 1.2. These are systematically lower than the values near
D = 1.5 obtained using the spectral method. Fundamentally there is no rea-
son why the two fractal dimensions should be equal. Elevation profiles are
not necessarily related to the shape of contours.
The correspondence of topography and bathymetry to a Brownian walk
also implies, importantly, that they are truly self-similar. For a Brownian
walk the amplitude coefficients are directly proportional to the correspond-
ing wavelengths. Thus the height-to-width ratios of mountains and hills are
the same at all scales.
It should also be noted that the power-law spectra given in Figure 7.17
provide further information beyond the fractal dimension. The spectra are
characterized by the amplitude in addition to the slope. A quantitative mea-
sure of the amplitude is the intercept (value of S) at a specified wave number
(k = 1 cycle km⁻¹). These reference amplitudes are a measure of the roughness
of the topography. The mean intercepts for the latitudinal and longitudinal
directions for the three regions in Oregon are given in Table 7.1.
Another application of spectral techniques is to well logs. It is common
practice to make a variety of measurements as a function of depth in oil
wells. Typical measurements include the local acoustic velocity, the electri-
cal conductivity, and neutron activation. The measured quantities are ob-
tained as a function of depth so that they are equivalent to a time series and
spectral techniques can be applied.
The power spectral densities obtained from porosity logs for eight wells
in the Gulf of Mexico are given in Figure 7.18 (Pelletier and Turcotte, 1996).
At spatial scales greater than 10 ft a good correlation is obtained with the
fractal relation (7.41) taking β in the range 1.3–1.6. Below this scale the
variability decreases significantly in most wells. This may be attributed to
increased homogeneity within beds. Todoeschuck et al. (1990) have also
considered the fractal behavior of well logs. Leary and Abercrombie (1994)
attribute the shear-wave source and coda-wave displacement spectra ob-
tained from seismograms in the Cajon Pass borehole to scattering that was
observed to obey power-law spectra from well logs.
There are many other examples of measurements in geology and geo-
physics that yield power-law spectra. Brown and Scholz (1985) have carried
out spectral studies of natural rock surfaces. They generally find fractal behavior
with a relatively large range of variability, 1 < D < 1.6. Similar
studies have been carried out by Power and Tullis (1991, 1995) and by
Pyrak-Nolte et al. (1995). The observation that fracture surfaces are self-
affine fractals can be used to scale the fluid permeability associated with
fractures.
An interesting question is whether climate obeys fractal statistics (Nico-
lis and Nicolis, 1984). Fluegeman and Snow (1989) have shown that the spatial
distribution of oxygen isotope ratios in sea-floor cores obeys fractal spectral
statistics. Since it is generally accepted that the isotope ratios are proportional
to the local temperatures, these results can be taken as evidence that
climate obeys fractal statistics. Hsui et al. (1993) have shown that variations
of sea level with time are a self-affine fractal. This is consistent with the
fractal distribution of sedimentary hiatuses discussed in Chapter 2.
An important application of power-law spectra is in interpolating be-
tween measured data sets. Consider the bathymetry of the oceans. Bathyme-
try is typically measured from ships along linear tracks and must be interpo-
lated to make bathymetric charts. This interpolation can make use of the fact
that the bathymetry has a power-law spectrum. The amplitude coefficients
are determined from the applicable fractal relation and the data are used to
determine the phases in a two-dimensional Fourier expansion of the bathym-
etry. This method can also be used to interpolate airborne magnetic surveys.
Hewett (1986) has used fractal techniques to interpolate well-log poros-
ity data from production wells to obtain the full three-dimensional porosity
distribution in an oil field. The horizontal variations in porosity are treated
as a Brownian walk in analogy to the fractal behavior of topography, and the
fractal behavior of the vertical variations are obtained directly from the well
logs. Molz and Boman (1993) have used this technique to interpolate well-
log data to predict the ground water movement and pollutant dispersion ad-
jacent to a waste disposal site.
In some cases a time series x(t) will have a well-defined correlation dimension
(Grassberger and Procaccia, 1983a, b). A vector for the time series
at t = t₁ is defined by the quantities x(t₁), x(t₁ + τ), x(t₁ + 2τ), . . . , x(t₁ + nτ).
At a later time t = t₂ another vector is defined by the quantities x(t₂),
x(t₂ + τ), x(t₂ + 2τ), . . . , x(t₂ + nτ). As long as the signals at t₁ and t₂ are uncorrelated,
the delay τ can be small. This process is known as forming n-tuples
and n is the embedding dimension. The cumulative number of pairs of
points N separated by a distance less than r is plotted against r for embedding
dimensions n = 2, 3, 4, . . . . If a straight-line correlation is obtained
such that N(r, n) ~ r^d, and if d becomes independent of n, approaching a
limiting value d_c, then d_c is the correlation dimension. Smith and Shaw (1990)
have applied this technique to sea-floor bathymetry. Cortini and Barton
(1993) analyzed the inflation-deflation patterns of an active volcanic cal-
dera (Campi Flegrei, Italy) as a self-affine time series and were able to make
successful forward predictions. Nicholl et al. (1994) have applied this ap-
proach to the sequence of intervals between eruptions of Old Faithful geyser
in Yellowstone National Park, Wyoming. They conclude that the sequence of
eruption intervals is a chaotic time series. Osborne and Provenzale (1989)
have obtained values of the correlation dimension for fractional Brownian
walks. These authors find a systematic dependence, with d_c = 1.140 ± 0.005.
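The pair-counting procedure described above can be sketched directly (an O(N²) illustration with our own variable names; the max-norm distance is a convenience, not a requirement of the method):

```python
import numpy as np

def correlation_sum(x, n, tau, radii):
    """Grassberger-Procaccia correlation sum: the fraction of pairs of
    n-tuple delay vectors separated by less than r, for each r."""
    x = np.asarray(x, dtype=float)
    m = len(x) - (n - 1) * tau
    # Delay-coordinate vectors: each row is a point in R^n.
    V = np.column_stack([x[i * tau: i * tau + m] for i in range(n)])
    d = np.abs(V[:, None, :] - V[None, :, :]).max(axis=2)  # max-norm
    dists = d[np.triu_indices(m, k=1)]
    return np.array([(dists < r).mean() for r in radii])

def correlation_dimension(x, n, tau, radii):
    """Slope d of log C(r) versus log r, so that N(r, n) ~ r^d."""
    C = correlation_sum(x, n, tau, radii)
    return np.polyfit(np.log(radii), np.log(C), 1)[0]

# Uncorrelated values fill the embedding space, so for n = 1 the
# estimated dimension should be close to 1.
rng = np.random.default_rng(2)
radii = np.logspace(-2, -1, 6)
d1 = correlation_dimension(rng.uniform(size=1500), 1, 1, radii)
```

For a chaotic signal one repeats the fit for n = 2, 3, 4, . . . and looks for the slope to saturate at d_c.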
where a₀ is a reference earth radius, θ is latitude, φ is longitude, C_lm and S_lm
are coefficients, and P_lm are associated Legendre functions that are fully
normalized. Here k₀ is the wave number and λ₀ = 1/k₀ is the wavelength over
which data are included in the expansion. With λ₀ = 2πa₀ we have
[Figure 7.21. Illustration of subscript arrangement in two-dimensional spectral analysis. (a) The 64 nm coefficients for an 8 × 8 subset of raw data. (b) Equivalent radial coefficients r for the various coefficients s in spatial-frequency space.]
The two-dimensional mean power spectral density S_2j for each radial wave
number k_j is given by

S_2j = (1/N_j) Σ |H_sr|²

where N_j is the number of coefficients that satisfy the condition j < r < j + 1
and the summation is carried out over the coefficients H_sr in this range. The
coefficients assigned to each interval for the example given in Figure 7.21(a)
are illustrated in Figure 7.21(b).
The dependence of the mean power spectral density on the radial wave
number k_j for a fractal distribution is (Voss, 1988)

S_2j ∝ k_j^−(β+1)

instead of (7.41). The addition of minus one to the power is required because
of the radial coordinates that are used in phase space. The dependence of
V(L) on L given in (7.29) is still valid, but with the additional dimension the
"box" derivation that follows now gives

D = 3 − Ha

for the fractal dimension of the surface instead of (7.33). Similarly, the derivation
of the relationship between β and Ha must be reexamined.
Average D

Data                      Two-dimensional analysis    One-dimensional analysis
Oregon topography
Synthetic topography
The roughness contrasts in the southern basin and range region are also quite
remarkable. The fractal analysis gives a quantitative measure of roughness.
In this chapter we have shown that we typically have D₁ = 1.5 and D₂ =
2.5 for self-affine topography. In Chapter 2 we found that the self-similar
fractal dimension of topography is near D_ss = 1.25. Kondev and Henley
(1995) have generated synthetic two-dimensional topography and have studied
the self-similar fractal behavior of topographic contours. For two-dimensional
Brown topography with D₂ = 2.5, they would obtain D_ss = 1.25, in good
agreement with many observations.
Problems
Problem 7.7. Consider the 16-year record of annual rainfall totals for
Phoenix, AZ, given in Table 7.3. (a) Determine the mean, variance, standard
deviation, and coefficient of variation for these values. (b) Determine
(R_N/S_N)_av for N = 4, 8, and 16.
Problem 7.8. Consider the 16-year record of annual rainfall totals for Seattle,
WA, given in Table 7.3. (a) Determine the mean, variance, standard deviation,
and coefficient of variation for these values. (b) Determine
(R_N/S_N)_av for N = 4, 8, and 16.
Problem 7.10. Derive (7.17) and (7.18), using the relations given in Problem
7.9.
Problem 7.11. Obtain an expression for the semivariance γ for the MA
model given in (7.15).
Problem 7.12. The following set of random numbers have a Gaussian distribution
with zero mean: −0.4287, −0.0541, 0.6224, −0.9545, −0.3745,
0.0455, −1.0512, 0.3431, 0.1318, −0.6346, 0.4436, 0.3743, 0.4589,
1.3667, −0.4031, 0.1154. Use (7.15) to determine an MA time series with
θ = 0.5, ȳ = 0, using these random numbers. Determine the variance of
the time series and compare the result with the predicted value from
(7.16).
Problem 7.13. Derive (7.21). It is appropriate to assume that
(1/n) Σ_{i=1}^{n} ε_i (y_{i−1} − ȳ) = 0 and (1/n) Σ_{i=1}^{n} (y_{i−1} − ȳ)² = σ²
as well as the relations given in Problem 7.9.
Problem 7.14. Obtain an expression for the semivariance γ for the AR
model given in (7.20).
Problem 7.15. Using the set of random numbers given in Problem 7.12, determine
an AR time series from (7.20) with φ₁ = 0.5 and ȳ = 0. Determine
the variance of the time series and compare the result with the predicted
value from (7.21). Determine the autocorrelation function for k =
1, 2 and compare the results with the predicted values from (7.22).
Problem 7.16. Consider the 16 random numbers given in Problem 7.12. Determine
(R_N/S_N)_av for N = 4, 8, and 16. Determine the best-fit value of
Hu from (7.60).
Problem 7.17. The definition of red noise is β = 1. What is the fractal dimension?
How do the variance V and standard deviation σ depend upon the interval
T? Is red noise an example of a fractional Brownian walk; if so, why?
Problem 7.18. Determine the aspect ratio (height-to-width ratio of the moun-
tains and valleys) using the correlation line from Figure 7.18. From this
GEOMORPHOLOGY
the substitution of (8.1) and (8.2) gives the fractal dimension of a drainage
network as

D = ln R_b / ln R_r   (8.4)
We now turn our attention to deterministic fractal trees. Three examples are
given in Figure 8.5. To specify the geometry of a deterministic fractal tree,
three quantities must be given: the bifurcation ratio R_b, the length-order ratio
R_r, and the angle of divergence θ. And these three quantities are independent
of order. For the example given in Figure 8.5(a), R_b = 3, R_r = 3, θ = 30°;
and from (8.4) D = 1 for this fractal tree. For the example given in Figure
8.5(b), R_b = 2, R_r = 2, θ = 60°, and again D = 1. And for the example in Figure
8.5(c), R_b = 2, R_r = √2, θ = 90°, and D = 2. In all cases the constructions
can be extended to infinite order without overlap. If the construction in Fig-
ure 8.5(c) is extended to infinite order, the plane is entirely covered by the
construction but with no overlap. Thus, this construction is an example of a
self-similar (identical at all scales), deterministic network that can drain
every point on a surface at as small a scale as is specified. This is the impli-
cation of D = 2, the dimension of a plane.
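The dimensions quoted for these three constructions follow directly from (8.4); a one-line check (our own sketch):

```python
import math

def tree_dimension(Rb, Rr):
    """Fractal dimension of a fractal tree from the bifurcation ratio
    Rb and the length-order ratio Rr: D = ln Rb / ln Rr (8.4)."""
    return math.log(Rb) / math.log(Rr)

# The three constructions of Figure 8.5:
a = tree_dimension(3, 3)              # Figure 8.5(a): D = 1
b = tree_dimension(2, 2)              # Figure 8.5(b): D = 1
c = tree_dimension(2, math.sqrt(2))   # Figure 8.5(c): D = 2
```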
Comparing the drainage network in Figure 8.1 with the fractal trees il-
lustrated in Figure 8.5 shows an important discrepancy. The drainage net-
work has side tributaries whereas the fractal trees do not. First-order streams
intersect other first-order streams to form second-order streams. But other
first-order streams intersect second-order, third-order, and all higher-order
streams. Similarly second-order streams intersect other second-order
streams to form third-order streams. But other second-order streams inter-
sect third-order, fourth-order, and all higher-order streams.
This class of fractal trees can also be quantified in terms of branching ratios
T_ij. These are the average numbers of branches of order i joining
branches of order j. Branching ratios are related to branch numbers by
If the primary branching is binary, (8.5) and (8.6) can be combined to give
the side-branching numbers in terms of two parameters,

T_k = a c^{k−1}   (8.8)

This is now a two-parameter family of fractal trees. For the fractal tree illustrated
in Figure 8.6(b) we have a = 1, c = 0 and for the fractal tree illustrated
in Figure 8.6(c) we have a = 1 and c = 2. Substitution of (8.8) into (8.7) gives
N_i = 2 N_{i+1} + a Σ_{k=1}^{n−i} c^{k−1} N_{i+k}   (8.9)

If we divide (8.9) by N_i and introduce the branching ratios from (8.2) we obtain

1 = 2/R_b + a Σ_{k=1}^{n−i} c^{k−1}/R_b^k
Thus large Tokunaga trees have branching ratios that are independent of
order. For the tree illustrated in Figure 8.6(c) with a = 1 and c = 2 we have
R_b → 4 as n → ∞. And when the length-order ratio R_r is specified the fractal
dimension is given by D = ln R_b / ln R_r (8.18).
For this subclass of Tokunaga fractal trees the branching ratio R_b and the
fractal dimension D can be obtained from (8.17) and (8.18) if the length-order
ratio R_r is specified. A related quantification of side branching has been
given by Vannimenus and Viennot (1989) and Ossadnik (1992).
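In the n → ∞ limit the sum in the branching-ratio relation becomes geometric, and R_b follows from a quadratic. A sketch (the closed form below is our rearrangement, valid for R_b > c):

```python
import math

def tokunaga_Rb(a, c):
    """Asymptotic bifurcation ratio of a Tokunaga tree: letting
    n -> infinity in the branching relation gives 1 = 2/Rb + a/(Rb - c),
    i.e. Rb is the larger root of Rb^2 - (2 + a + c) Rb + 2c = 0."""
    b = 2.0 + a + c
    return 0.5 * (b + math.sqrt(b * b - 8.0 * c))

Rb_fig = tokunaga_Rb(1, 2)      # the Figure 8.6(c) tree: Rb -> 4
Rb_fit = tokunaga_Rb(1.2, 2.5)  # best-fit drainage parameters (below)
```

With the best-fit drainage-network parameters a = 1.2 and c = 2.5 discussed below, this gives R_b ≈ 4.6, consistent with the observed bifurcation ratios of 4.6 and 4.7.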
We now address the question whether the statistics of actual drainage
networks are represented by Tokunaga fractal trees. Peckham (1995) has de-
termined branching-ratio matrices for the Kentucky River basin in Kentucky
and the Powder River basin in Wyoming. Both are eighth-order basins with
the Kentucky River basin having an area of 13,500 km² and the Powder
River basin an area of 20,181 km². For the Kentucky River basin the bifurcation
ratio is R_b = 4.6 and the length-order ratio is R_r = 2.5; for the Powder
River basin the bifurcation ratio is R_b = 4.7 and the length-order ratio is
R_r = 2.4. The dependence of the number of streams of various orders on their
mean length for the two basins are given in Figure 8.9. Again the results correlate
well with the fractal relation (8.4) taking D = 1.85. The branching-ratio
matrices for the two river basins are given in Figure 8.10. We now determine
values for T_k by averaging the values of T_{i,i+k} over i
[Figure 8.9. Dependence of the number of streams N of various orders 1–8 on their mean length r for the Kentucky River basin in Kentucky and the Powder River basin in Wyoming. The straight-line correlation is with (8.4) taking D = 1.85.]
T_k = (1/(n − k)) Σ_{i=1}^{n−k} T_{i,i+k}
For example, we find that T₁ = 18 for the Kentucky River basin by taking the
average of T₁₂ = 15.6, T₂₃ = 20.3, T₃₄ = 16.0, and T₄₅ = 20.0. The values of T_k
for the two basins are given in Figure 8.11 as a function of k. It is seen that
the results correlate well with (8.8) taking a = 1.2 and c = 2.5. These results
are also tabulated in Table 8.1. At least for these two basins, in quite different
geological settings, good agreement with Tokunaga fractal trees is obtained
with these values of the parameters a and c. It is also of interest to compare
these values with those given in (8.16). With Rr = 2.5 for the Kentucky River
basin and Rr = 2.4 for the Powder River basin, the values from (8.16) are a =
1.5, c = 2.5 and a = 1.4, c = 2.4 respectively. These are in quite good agree-
ment with the best-fit values of a = 1.2 and c = 2.5.
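The diagonal averaging used to obtain T_k from a branching-ratio matrix can be sketched as follows (our own helper; the example matrix is hypothetical, not Peckham's data):

```python
import numpy as np

def side_branch_averages(T):
    """Average the branching-ratio matrix along diagonals:
    T_k is the mean over i of T[i, i+k]."""
    T = np.asarray(T, dtype=float)
    n = T.shape[0]
    return [float(np.mean(T.diagonal(k))) for k in range(1, n)]

# Hypothetical third-order matrix, upper triangle holding T_{i,j}:
T = [[0.0, 2.0, 4.0],
     [0.0, 0.0, 2.0],
     [0.0, 0.0, 0.0]]
tk = side_branch_averages(T)   # [2.0, 4.0]
```

A fit of the resulting T_k values to T_k = a c^(k−1) on semi-log axes then gives the Tokunaga parameters a and c.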
Empirically, actual drainage basins appear to be well approximated by
Tokunaga fractal trees. There are a number of other applications of fractal
trees in geology and geophysics. River deltas are one obvious choice. An-
other is the upward migration of magma beneath a volcano. Partial melting
in the earth's mantle occurs on grain boundaries. Because the melt, the
magma, is lighter than the residual solid, it drains upward eventually reach-
ing the earth's surface, resulting in a volcanic eruption. One approach to the
magma ascent problem is to treat it as flow in a uniform porous medium
(Turcotte and Schubert, 1982, pp. 413–416). An alternative is to treat the
magma paths like a drainage network. Rivulets of magma combine to form
ascending magma streams, and ascending streams of magma combine to
form magma rivers. Hart (1993) has proposed a fractal tree model for the as-
cending magma and has considered its implications on magma composition.
The concepts of fractal trees also have a wide variety of other applica-
tions. Examples include the growth of actual trees and other plants, as well as
the cardiovascular distribution of veins and arteries and the bronchial system.
Returning to drainage networks, another fractal correlation to drainage
patterns is obtained if the length of the principal river in a drainage basin P is
plotted against the area of the basin A. Data for several basins in the northeastern
United States are given in Figure 8.12 (Hack, 1957). The applicable
fractal relation is

P ∝ A^{D/2}

and good agreement with the data in Figure 8.12 is obtained taking D = 1.22.
Robert and Roy (1990) have discussed this fractal relation between main-
stream length and drainage area.
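Hack's correlation can be checked numerically; in the sketch below the basin values are invented to lie exactly on the D = 1.22 line (they are not Hack's data):

```python
import numpy as np

def hack_dimension(A, P):
    """Estimate D in the Hack relation P ~ A^(D/2) as twice the slope
    of log P versus log A."""
    slope = np.polyfit(np.log(A), np.log(P), 1)[0]
    return 2.0 * slope

A = np.array([10.0, 100.0, 1000.0, 10000.0])  # basin areas, km^2
P = 1.5 * A ** 0.61                           # mainstream lengths, km
D = hack_dimension(A, P)                      # recovers D = 1.22
```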
It is clear that drainage networks are self-similar and fractal to a good
approximation. But this is basically an empirical statement that does not ad-
dress how drainage networks evolve. It is clear that in most terrains drainage
networks are a direct consequence of erosion. In young terrains tectonic
processes play an important role; however, erosional processes may still be
dominant. Consider the Hawaiian chain of volcanic islands. A young island
such as Hawaii is made up of deterministic conical structures associated
with shield volcanoes. These are not fractal. However, sufficient erosion has
occurred on Maui and Oahu in a few million years to develop an irregular,
scale-invariant morphology that exhibits fractal statistics. The erosional evo-
lution of landscapes is a problem that has fascinated natural scientists for
centuries. The forms of mature landscapes evolve through processes of ero-
sion and deposition. An essential question is whether it is possible to de-
velop a basic theory of landscapes or whether it is necessary to consider only
statistical aspects of the problem.
A variety of models were proposed in the 1960s to describe the statistics
and origins of drainage networks (Smart, 1972). Descriptive models were introduced
by Shreve (1966, 1967) and Scheidegger (1967) in which drainage
networks were considered as infinite topologically random networks (i.e., no
one distribution of network links is preferred over any other). They showed
that the statistics of real drainage networks matched the most probable num-
ber-order distribution of a topologically random network. Snow (1989) has
shown that the sinuosity of streams exhibits fractal behavior, and Nagatani
(1993) has shown that meander patterns are fractal. Although these models
have proven useful as a way to describe drainage networks, they contain lit-
tle information on the dynamical processes that form them.
Other workers have proposed random growth models to explain the
planform organization of drainage networks. Leopold and Langbein (1962)
and Schenck (1963) proposed models in which the streams themselves fol-
lowed random walks. Thus the network was not headward growing, but
propagated laterally from the most central "trunk" stream. In addition, the
network grew by the addition of entire stream segments, rather than by gradual
expansion (accretion). Howard (1971) introduced an accretionary headward
growth model in which a site adjacent to the existing network was chosen
randomly, and the network propagated to this site. Thus, all sites on the network
had an equal probability for growth.
A growth model that has been applied to stream networks as well as a wide
variety of other applications is diffusion limited aggregation (DLA). How-
ever, before considering DLA we illustrate a deterministic fractal growth
model based on the Koch snowflake. This model is illustrated in Figure 8.13.
An initial unit square at zero order [(Figure 8.13(a)] grows at first order by
the addition of four unit squares at the four corners of the original "seed"
particle, as illustrated in Figure 8.13(b). At second order, four of the first-
order structures are added as shown in Figure 8.13(c). At third order, four of
the second-order structures are added as shown in Figure 8.13(d). We have
N₀ = 1, r₀ = 1; N₁ = 5, r₁ = 3; N₂ = 25, r₂ = 9; N₃ = 125, r₃ = 27 and from (2.2)
we have D = ln 5/ln 3 = 1.465. One approach to quantifying the growth of an
aggregate such as that illustrated in Figure 8.13 is to determine the number
of particles as a function of size. At zero order the number of particles is N₀ =
1 and a circle with radius r₀ = 1/√2 covers the particle, at first order the
number of particles is N₁ = 5 and a circle with radius r₁ = 3/√2 covers the
particles, at second order the number of particles is N₂ = 25 and a circle with
radius r₂ = 9/√2 covers the particles, and at third order the number of particles
is N₃ = 125 and a circle with radius r₃ = 27/√2. Noting that in this case
N ~ r^D we again find D = ln 5/ln 3 = 1.465 just as above.
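The counting argument can be verified in a few lines (our own sketch):

```python
import math

# At each order of the Figure 8.13 construction the particle count is
# multiplied by 5 while the covering radius is multiplied by 3, so
# N ~ r^D gives D = ln 5 / ln 3.
N = [5 ** k for k in range(4)]                   # 1, 5, 25, 125
r = [3 ** k / math.sqrt(2) for k in range(4)]    # covering radii
D = math.log(N[3] / N[0]) / math.log(r[3] / r[0])
```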
However, in applications to statistical growth models and to natural phe-
nomena, it is generally preferable to use the "radius of gyration" rather than
the radius of a circle (sphere) that covers the growing aggregate. The definition
of the radius of gyration r_g for an aggregation of N particles growing from
a seed particle in two dimensions is

r_g² = (1/N) Σ_{i=1}^{N} r_i²   (8.21)

where r_i is the radial distance of particle i from the seed particle. A fractal relation
is defined by

N ~ (r_g/a)^D   (8.22)

where a is the particle size. For the example given in Figure 8.13 we find
that at first order the centers of the four accreted particles are at a distance
√2 from the seed particle. Thus from (8.21) and (8.22) we obtain a fractal
dimension slightly less than the value D = 1.46 obtained for the basic construction.
For the third-order construction illustrated in Figure 8.13(d) we obtain
r_g/a = 12.06 and from (8.22) a similar value.
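The first-order value follows directly from (8.21); a sketch (particle coordinates as in Figure 8.13(b), in units of the particle size a):

```python
import numpy as np

def radius_of_gyration(points):
    """rg from (8.21): rg^2 is the mean squared distance of the
    particles from the seed at the origin."""
    pts = np.asarray(points, dtype=float)
    return float(np.sqrt(np.mean(np.sum(pts ** 2, axis=1))))

# Seed at the origin plus four corner particles at distance sqrt(2):
rg = radius_of_gyration([(0, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)])
# rg = sqrt(8/5), about 1.265 particle sizes
```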
"Killing Circle"
n Random Path
"Launching Circle"
The resulting fractal structure often resembles DLA clusters (Chen and Wilkinson,
1985; Måløy et al., 1985; Nittmann et al., 1985, 1986; Van Damme et al.,
1986; Feder and Jøssang, 1995). Sornette et al. (1990) have suggested that
the fractal distributions of faults and joints discussed in Chapter 4 are the re-
sult of a DLA random growth. Two-dimensional surface exposures of frac-
tures and joints are generally fractal with D = 1.7 in agreement with the val-
ues obtained for DLA clusters. DLA models for crack propagation have also
been used to model fragmentation, and power-law number-size statistics for
fragments were obtained (Gomes and Sales, 1993).
[Figure: DLA model for a drainage network. A random walker is randomly introduced to an unoccupied cell. The random walk proceeds until a cell is encountered with one (and only one) of the four nearest neighbors occupied (hatched); the new cell is accreted to the drainage network. If a random walker enters a prohibited cell or wanders off the grid it is terminated. The legend identifies the newly added cell, the random walk, prohibited sites, and other allowed sites for accretion.]
In the example shown, 16 cells have been accreted to the seed cells. Cells are allowed to
accrete if one (and only one) of the four nearest neighbor cells is part of the
preexisting network. Prohibited sites that already have two neighboring sites
occupied are identified by stars. Sites available for accretion to the network
are indicated by open circles. A random walker is introduced at a random
cell on the grid and the resulting random-walk path is traced by the solid
line. After 28 random walks it accretes to the network at the cross-hatched
cell. A random walk proceeds until the walker (1) accretes to the network,
(2) exits the grid, or (3) lands on a prohibited cell. In cases (2) and (3) the
walk is terminated and a new walker is introduced on a new, randomly se-
lected site. The iteration of this basic procedure results in a branching net-
work composed of linked drainage cells. This "self-avoiding" algorithm pre-
vents local clumping of drainage cells.
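The accretion rules of this paragraph can be sketched as a small simulation (grid size, number of accreted cells, and termination details are our illustrative choices, not values from the text):

```python
import random

def dla_drainage(size=21, n_cells=40, seed=0):
    """Lattice sketch of the DLA drainage model: a walker accretes only
    where exactly one of the four nearest neighbors is occupied;
    walkers reaching a prohibited cell (two or more occupied
    neighbors) or leaving the grid are discarded and relaunched."""
    random.seed(seed)
    occupied = {(size // 2, size // 2)}          # seed cell
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def occupied_neighbors(c):
        return sum((c[0] + dx, c[1] + dy) in occupied for dx, dy in moves)

    while len(occupied) < n_cells + 1:
        c = (random.randrange(size), random.randrange(size))
        if c in occupied:
            continue                             # relaunch the walker
        while True:
            if not (0 <= c[0] < size and 0 <= c[1] < size) or c in occupied:
                break                            # off the grid: terminate
            n = occupied_neighbors(c)
            if n == 1:
                occupied.add(c)                  # accrete to the network
                break
            if n >= 2:
                break                            # prohibited cell: terminate
            dx, dy = random.choice(moves)
            c = (c[0] + dx, c[1] + dy)
    return occupied

network = dla_drainage()
```

The single-neighbor rule is what makes the algorithm self-avoiding: it is the only ingredient preventing the accreted cells from clumping into compact patches.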
Although the model is highly schematic, the mechanics outlined here are
analogous to the mechanics operating in real drainage systems. The accre-
tionary nature of network growth produces a headward evolving drainage
where τ is a characteristic time for erosion. With the initial condition that
h = h₀ at t = 0 the solution is
this gives
where η(x, t) is the Gaussian white noise. This is one form of the linear
Langevin equation and is also known as the Edwards-Wilkinson equation.
Comparing (8.29) and (8.27) shows that the Culling model gives Brownian-walk
topography if erosion or deposition occurs randomly. An extensive
discussion of growth processes has been given by Barabási and Stanley
(1995).
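A discrete version of the linear Langevin (Edwards-Wilkinson) equation is easy to integrate; below is a sketch (grid size, diffusivity, time step, and noise amplitude are our illustrative choices):

```python
import numpy as np

# Integrate dh/dt = Dc * d2h/dx2 + eta(x, t) on a periodic 1-D grid,
# with eta a Gaussian white noise: random deposition and erosion
# smoothed by diffusion, as in the Culling model.
rng = np.random.default_rng(3)
nx, nt, Dc, dt = 256, 2000, 0.2, 0.5
h = np.zeros(nx)
for _ in range(nt):
    lap = np.roll(h, 1) - 2.0 * h + np.roll(h, -1)   # discrete Laplacian
    h += dt * Dc * lap + np.sqrt(dt) * rng.standard_normal(nx)
```

In one dimension the saturated interface generated this way is self-affine with Ha = 1/2, consistent with the Brownian-walk topography discussed above.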
A wide variety of models have been proposed to simulate scale-invariant
(fractal) topography and/or river networks. Chase (1992) has combined a
cellular-automata advection model with diffusion and has generated reason-
ably realistic topography with drainage networks. Similar models have been
given by Takayasu and Inaoka (1992) and Lifton and Chase (1992). Leheny
and Nagel (1993) introduced an avalanche model and derived both topogra-
phy and river networks. Meakin et al. (1991) have applied a DLA approach,
and Willgoose et al. (1991) an advective-diffusive model. Kramer and
Marder (1992) have modeled the development of drainage networks on a
water-covered landscape assuming that the erosion is proportional to the
product of velocity and pressure. Barzini and Ball (1993) use a similar
model to develop synthetic braided rivers. They point out that both the simu-
lations and real braided rivers have a fractal distribution of island sizes with
D = 1.
Stark (1991) used an invasion percolation technique, in which the grow-
ing network was superposed on a fixed random field (analogous to a sub-
strate with variable erodibility). At each time step, the network propagated to
the adjacent site having the highest erodibility value over the entire perime-
ter. Although all sites on the network had differing probabilities for growth,
these probabilities did not change through time since the random field was
fixed from the start.
Stark (1994) modeled patterns of erosion using invasion percolation,
Eden growth, and DLA models. Liu (1992) has utilized percolation clusters.
Takayasu (1993) used random self-affine tiling to explain the power-law dis-
tribution of drainage basin sizes. Nikora and Sapozhnikov (1993) have pre-
sented a model based on random walk simulations. Minimum energy
dissipation and/or entropy methods have been applied to landforms and river
networks by Rinaldo et al. (1993) and by Sun et al. (1993, 1994, 1995). Sor-
nette et al. (1994) have presented a model for the tectonic generation of frac-
tal topography utilizing statistical distributions of displacements on fault
arrays.
8.7 Floods
The volumetric flow Q(t) in a river constitutes a time series. However, the
flow is strongly asymmetric so that Gaussian statistics are generally a poor
approximation. Large values of Q for relatively short periods of time consti-
tute floods, whereas low values of Q for relatively long periods constitute
droughts. Most rivers also have a strong annual periodic component.
Floods present a severe natural hazard; to assess the hazard and to allo-
cate resources for its mitigation it is necessary to make flood-frequency haz-
ard assessments. The integral of the flow in a river is required for the design
of reservoirs and to assess available water supplies during periods of
drought. An important question in geomorphology concerns which floods
dominate erosion. Is erosion dominated by the 10 year, the 100 year, or the
very largest floods? The answer to this question depends upon whether ex-
treme flood probabilities have an exponential or power-law dependence on
time. One estimate of the severity of a flood is the peak discharge at a station,
Q_p. The magnitude of the peak discharge is affected by a variety of circum-
stances including (1) the amount of rainfall produced by the storm or storms
in question, (2) the upstream drainage area, (3) the saturation of the soil in
the drainage area, (4) the topography, soil type, and vegetation in the
drainage area, and (5) whether snow melt is involved. In addition dams,
stream channelization, and other man-made modifications can affect the
severity of floods.
To estimate the severity of future floods, historical records are used to
provide flood-frequency estimates. Unfortunately, this record generally cov-
ers a relatively short time span and no general basis has been accepted for its
extrapolation. Quantitative estimates of peak discharges associated with pa-
leofloods are generally not sufficiently accurate to be of much value. A wide
variety of geostatistical distributions have been applied to flood-frequency
forecasts, often with quite divergent predictions. Examples of distributions
used include power law (fractal), log normal, gamma, Gumbel, log Gumbel,
Hazen, and log Pearson.
It is standard practice to use the annual peak discharges in flood-fre-
quency analyses. In the United States this is the peak discharge during a wa-
ter year, which extends from October 1 of the preceding year to September
30. There are serious problems with this approach and alternatives will be
discussed. A scale-invariant (power-law) dependence of the peak discharge
Q_p on the recurrence interval T can be written

    Q_p ∝ T^{Ha}                                                     (8.30)

where the Hausdorff measure Ha plays a role similar to that in (7.29). The
Hausdorff measure is related to the fractal dimension by (7.33). Since river
discharges have a strong annual variability, the interval T is generally taken
as an integer number of years when floods are considered. This scale-invari-
ant distribution can also be expressed in terms of a flood-frequency factor F,
which is the ratio of the peak discharge over a 10-year period to the peak dis-
charge over a 1-year period. With self-similarity the flood-frequency factor
F is also the ratio of the 100-year peak discharge to the 10-year peak dis-
charge and the ratio of the 1000-year peak discharge to the 100-year peak
discharge. In terms of Ha and D we have

    F = 10^{Ha} = 10^{2-D}                                           (8.31)

The log of the peak discharge for each flood is plotted against the log of its
assigned period.
This is the same technique that was used for earthquakes in Chapter 4. Re-
sults for station 1-1805 (Goss Heights, MA) are given in Figure 8.26(a). The
solid line is the least-squares fit of (8.30) with the data over the range 50 <
Q_p < 200 m³/s; large floods are omitted from the fit because of their small
number. The solid line corresponds to Ha = 0.51 and from (8.31) we have
F = 3.3. Results for station 11-0980 (Pasadena, CA) are given in Figure
8.26(b); the solid line is the best fit of (8.30) with the data over the range
10 < Q_p < 100 m³/s. The solid line corresponds to Ha = 0.87, and from
(8.31) we have F = 7.4. In both cases the fit to the power-law (fractal) rela-
tion is quite good. The values of Ha and F in California are considerably
larger than in Massachusetts. Large floods are relatively more probable in
the arid climate than in the temperate climate.
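The flood-frequency factor can be evaluated directly from the Hausdorff measure; assuming the power-law form Q_p ∝ T^{Ha} discussed above, the ratio of the 10T-year to the T-year peak discharge is F = 10^{Ha}. A minimal sketch:

```python
# Flood-frequency factor F from the Hausdorff measure Ha, assuming the
# power-law (fractal) form Q_p ~ T^Ha, so that the ratio of the 10T-year
# to the T-year peak discharge is F = 10**Ha.
def flood_frequency_factor(ha: float) -> float:
    return 10.0 ** ha

print(round(flood_frequency_factor(0.51), 2))  # station 1-1805 (MA)
print(round(flood_frequency_factor(0.87), 2))  # station 11-0980 (CA)
```

With Ha = 0.87 this gives F ≈ 7.4, matching the Pasadena value quoted in the text; Ha = 0.51 gives F ≈ 3.2, consistent to rounding with the quoted 3.3.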
Many statistical distributions have been applied to historical records of
floods. Benson (1968) has given six statistical correlations for each of his
ten benchmark stations. His results for the 2-parameter gamma (Ga), Gum-
bel (Gu), log Gumbel (LGu), log-normal (LN), Hazen (H), and log Pearson
type III (LP) are given in Figure 8.26(a) for station 1-1805 and in Figure
8.26(b) for station 11-0980. For large floods the fractal prediction (F) corre-
lates best with the log Gumbel (LGu), whereas the other statistical tech-
niques predict longer recurrence times for very serious floods. The fractal
and log Gumbel are essentially power-law correlations, whereas the others
are essentially exponential.
The values of Ha, D, and F are given for all ten benchmark stations in
Table 8.2. The correlations with the fractal relation (8.30) in Figure 8.26 are
typical of the ten stations. The parameter F is a measure of the relative sever-
ity of flooding. The higher the value of F, the more likely that severe floods
will occur. Our results show that there are clear regional trends in values of
F. The values in the southwest including Nevada (F = 4.13) and New Mexico
(F = 4.27) as well as California (F = 7.4) are systematically high. These high
values can be attributed to the arid conditions and the rare tropical (mon-
soonal) storm that causes severe flooding. Central Texas (F = 5.24) is also
high and Georgia (F = 3.47) is intermediate. These areas are influenced by
hurricanes. The northern tier of states including Massachusetts (F = 3.26),
Minnesota (F = 2.95), Nebraska (F = 3.47), and Wyoming (F = 3.31) range
from low values in the east to intermediate values in the west. Washington
(F = 2.04) has the lowest value of the stations considered; this low value is
consistent with the maritime climate where extremes of climate are rare.
We have also determined the Hurst exponent Hu for the ten benchmark
stations. Values of R/S for T = 5, 10, 25, and 50 years (R/S = 1 for T = 2 by
definition) are given in Figure 8.27(a) for station 1-1805 (Goss Heights,
MA) and in Figure 8.27(b) for station 11-0980 (Pasadena, CA). Good corre-
lations are obtained with (7.59) taking Hu = 0.67 for station 1-1805 and Hu =
0.68 for station 11-0980. Values of Hu for all ten stations are given in Table
8.2. The values are nearly constant, with a range from 0.66 to 0.73, indicat-
ing moderate persistence. It is not surprising that the values of the Hausdorff
measure Ha differ from the values of the Hurst exponent Hu since the former
refers to the statistics of the flood events and the latter to the statistics of the
running sum.
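The rescaled-range statistic can be sketched in a few lines; this is a minimal R/S implementation (window handling and small-sample bias corrections vary between authors), applied here to uncorrelated noise, for which Hu should be near 0.5:

```python
import numpy as np

def rescaled_range(x):
    # R/S: range of the demeaned running sum divided by the standard
    # deviation of the record.
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std()

def hurst_exponent(x, windows):
    # Slope of log(mean R/S) versus log(window length T).
    rs = [np.mean([rescaled_range(x[i:i + w])
                   for i in range(0, len(x) - w + 1, w)]) for w in windows]
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.normal(size=4096)          # uncorrelated noise: Hu near 0.5
print(round(hurst_exponent(white, [8, 16, 32, 64, 128]), 2))
```

A persistent record (Hu > 0.5), such as the streamflows discussed above, would give a steeper log-log slope.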
The results indicate that there is considerable variation of F but very lit-
tle variation in Hu.

Table 8.2. Values of the Hausdorff measure Ha, fractal dimension D, flood
intensity factor F, and Hurst exponent Hu for the ten benchmark stations

Mandelbrot and Wallis (1968, 1969b) introduced the
Noah and Joseph effects. The Noah effect is the skewness of the distribution
of flows in a river (or of a non-Gaussian distribution) and the Joseph effect is
the persistence of the flows. It is reasonable to conclude that the variations in
F can be attributed to the Noah effect and the constancy of the Hurst expo-
nent can be attributed to the Joseph effect. An important conclusion is that
R/S analysis is not relevant to flood-frequency hazard assessments.
Figure 8.27. The rescaled range (R/S) for several intervals T. (a) Station
1-1805. (b) Station 11-0980. The correlations are with (7.59) and the Hurst
exponents Hu are given.
8.8 Wavelets
The wavelet transform of a time series f(t) is defined by

    W(t, a) = a^{-1/2} ∫_{-∞}^{∞} f(t') g[(t' - t)/a] dt'            (8.34)

where f(t') is the time series and g[(t' - t)/a] is the filter. The filter is cen-
tered at t and a is a measure of the width of the filter. The quantity g(t') is
known as the "mother wavelet." Other wavelets are rescaled versions of the
mother wavelet. The area of each wavelet must sum to zero so that

    ∫_{-∞}^{∞} g(t) dt = 0

A widely used choice is the Mexican hat wavelet, g(t) = (1 - t²) exp(-t²/2),
for which

    W(t, a) = a^{-1/2} ∫_{-∞}^{∞} f(t') [1 - ((t' - t)/a)²]
              exp[-(1/2)((t' - t)/a)²] dt'                           (8.35)
The mother Mexican hat wavelet is illustrated in Figure 8.28. For more de-
tails of the wavelet transform, the reader is referred to Schiff (1992),
Daubechies (1988), and Young (1992).
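As a concrete illustration, the Mexican hat wavelet transform can be approximated by direct summation. The a^{-1/2} normalization here is an assumption (conventions differ between authors); the qualitative scale selectivity does not depend on it:

```python
import numpy as np

def mexican_hat(t):
    # Mother Mexican hat wavelet g(t) = (1 - t^2) exp(-t^2 / 2).
    return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

def wavelet_transform(f, t, a, dt=1.0):
    # Discrete approximation to W(t, a) = a^(-1/2) * integral of
    # f(t') g((t' - t)/a) dt'.
    tp = np.arange(len(f)) * dt
    return np.sum(f * mexican_hat((tp - t) / a)) * dt / np.sqrt(a)

tp = np.arange(512, dtype=float)
signal = np.cos(2.0 * np.pi * tp / 64.0)   # pure oscillation, period 64
for a in (4.0, 16.0, 64.0):
    print(a, round(abs(wavelet_transform(signal, 256.0, a)), 3))
```

The response is largest at the scale comparable to the period of the oscillation (a = 16 here), which is what makes the scalograms discussed below useful for isolating annual or diurnal signals.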
To illustrate the application of the wavelet transform we will consider
two streamflow time series (Smith et al., 1997). We first consider a seven-
year, daily discharge record for the Ammonoosuc River, Maine. The record
of daily discharges is given in Figure 8.29(a). The record is characterized by
long-period high flows associated with the annual spring snow-melt events
and by short-period high flows associated with rainstorm events throughout
the year. Wavelet-transform magnitudes of the time series W(t, a) are given in
Figure 8.29(c) for scales a = 1, 2, 4, 8, 16, 32, 64, and 128 days. The trans-
form magnitudes are contoured using a single threshold W = 5.5 × 10^{-6} m³/s
in Figure 8.29(b) giving a wavelet scalogram. Regions of this a versus t plot
in which the magnitude threshold is exceeded appear in black. The seven an-
nual snow-melt events are clearly illustrated.
As a second example we consider the hourly discharge record for the
Forrest Kerr Creek in northwestern British Columbia, Canada. The 311 km²
drainage area contains several large glaciers, and we consider the 100-day
summer 1992 record given in Figure 8.30(a). The record has a strong diurnal
variation associated with the daily melting cycle. Wavelet-transform magni-
tudes of the time series W(t, a) are given in Figure 8.30(c) for scales a = 1, 2,
4, 8, 16, 32, 64, and 128 hours. The corresponding contoured wavelet scalo-
gram is given in Figure 8.30(b). The strong diurnal signal appears in the a =
8 and 16 hour wavelet-transform magnitudes and is clearly illustrated in the
wavelet scalogram. In addition there are melt episodes with 5–20-day peri-
ods associated with the regional climate.
Problems
Problem 8.1. Consider the fractal tree illustrated in Figure 8.31(a). The
length-order ratio is Rr = 2. (a) Determine the bifurcation ratios for 1st to
2nd, 2nd to 3rd, and 3rd to 4th order branches. What are the correspond-
ing fractal dimensions? (b) Write the branch-number matrix and the
branch-ratio matrix for this tree. (c) Determine T_k for k = 1, 2, 3. (d) De-
termine a and c as defined in (8.8). (e) Determine the asymptotic bifur-
cation ratio and fractal dimension for large trees of this form from (8.14)
and (8.15).
Problem 8.2. Consider the fractal tree illustrated in Figure 8.31(b). The
length-order ratio is Rr = 3. (a) Determine the bifurcation ratios for 1st to
2nd, 2nd to 3rd, and 3rd to 4th order branches. What are the correspond-
ing fractal dimensions? (b) Write the branch-number matrix and
branch-ratio matrix for this tree. (c) Determine T_k for k = 1, 2, 3. (d) De-
termine a and c as defined in (8.8).
Problem 8.3. The asymptotic branching relation (8.14) is not valid for the
fractal tree illustrated in Figure 8.31(b) because it has triple branching
rather than double (binary) branching. (a) Show that the correct relation
is
(b) Determine the asymptotic branching ratio for the fractal tree illus-
trated in Figure 8.31(b) and the corresponding fractal dimension.
Problem 8.4. Consider the fractal tree illustrated in Figure 8.31(c). The
length-order ratio is Rr = 3. (a) Determine the bifurcation ratios for 1st to
2nd, 2nd to 3rd, and 3rd to 4th order branches. What are the correspond-
ing fractal dimensions? (b) Write the branch-number matrix and
branch-ratio matrix for this tree. (c) Determine T_k for k = 1, 2, 3. (d) De-
termine a and c as defined in (8.8). (e) Determine the asymptotic bifur-
cation ratio and fractal dimension for large trees of this form using the
result given in Problem 8.3.
Problem 8.5. Consider the deterministic fractal growth model illustrated in
Figure 8.32. An initial equilateral triangle at zero order (a) grows at first
order (b) by the addition of three triangles at the three corners of the
original seed particle. At second order (c), three of the first-order con-
structions are added as shown.
(a) What is the fractal dimension of this construction?
(b) What is the radius of gyration at first and second order? What is the
corresponding fractal dimension from (8.22)?
DYNAMICAL
SYSTEMS
We now turn our attention to some examples of deterministic chaos that have
applications in geology and geophysics. There are two requirements for a so-
lution that exhibits deterministic chaos. The first requirement is that we are
solving deterministic equations with specified initial and/or boundary condi-
tions. Thus the applicable equations are deterministic, not statistical. The
second requirement is that solutions that have initial conditions that are in-
finitesimally close diverge exponentially as they evolve. However, before
we consider solutions that are chaotic, some necessary introductory material
is presented. Chaotic behavior is found only for nonlinear systems of equa-
tions. In this chapter some of the standard nomenclature for the study of non-
linear equations is presented (Verhulst, 1990).
Probably the simplest nonlinear total differential equation is the logistic
equation

    dx/dt = x (1 - x)                                                (9.1)

The fixed points, obtained by setting dx/dt = 0, are x = 0 and x = 1. Substitut-
ing x = 1 + x̃_1 and linearizing gives dx̃_1/dt = -x̃_1, with the solution

    x̃_1 = x̃_10 e^{-t}

where x̃_10 is again assumed to be small but finite. As time evolves, x̃_1 ap-
proaches zero. Thus the solutions in the immediate vicinity of the fixed point
x = 1 are stable. The stability of the fixed point x = 1 is clearly illustrated in
Figure 9.1. For all initial conditions x_0 the solutions "flow" in time toward
the stable fixed point x = 1. Also, adjacent solutions tend to converge to-
ward each other. These solutions are not chaotic.
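The flow toward the stable fixed point can be checked numerically; a simple forward-Euler sketch of dx/dt = x(1 − x):

```python
# Forward-Euler sketch of the logistic equation dx/dt = x(1 - x): all
# positive initial conditions flow to the stable fixed point x = 1, and
# adjacent solutions converge (no chaos).
def integrate_logistic(x0, t_max=20.0, dt=1e-3):
    x = x0
    for _ in range(int(t_max / dt)):
        x += dt * x * (1.0 - x)
    return x

for x0 in (0.1, 0.5, 1.5):
    print(x0, round(integrate_logistic(x0), 6))  # each approaches 1
```

Solutions started above and below the fixed point both converge to x = 1, and two nearby initial conditions end up closer together than they started.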
To discuss further the behavior of nonlinear equations we consider the
van der Pol equation, which in nondimensional form can be written

    d²x/dt² - ε (1 - x²) dx/dt + x = 0                               (9.14)

with ε the only parameter governing the behavior. In considering the solu-
tions of this second-order nonlinear equation, it is standard practice to intro-
duce the definition of velocity

    y = dx/dt

so that (9.14) becomes the pair of first-order equations

    dx/dt = y,    dy/dt = ε (1 - x²) y - x                           (9.17)

The xy-space is known as the phase space or phase plane. Solutions of (9.17)
follow phase trajectories in this two-dimensional plane with time t as a para-
meter.
    We first consider the solution of (9.17) when ε = 0. In this case it be-
comes simple harmonic motion, with

    x² + y² = x_0²                                                   (9.19)
Simple harmonic motion is a circle in the phase plane. The radius of the cir-
cle is determined by the initial nondimensional amplitude x_0 (or the initial
velocity). The relation (9.19) also represents conservation of energy in this
nondissipative system; it is the sum of the potential and kinetic energies. For
this system the fixed point at x = y = 0 is known as a center. The behavior is
illustrated in Figure 9.2(a).
For finite values of ε it is necessary to solve (9.17) numerically. The re-
sult for ε = 1 is given in Figure 9.2(b). Solutions for all initial conditions
converge toward a limit cycle; this limit cycle is independent of the initial
conditions. The physical reason for this behavior can be seen in the original
van der Pol equation (9.12). For small amplitudes the negative linear damp-
ing term dominates and the amplitude increases. For large amplitudes the
positive cubic damping term dominates and the amplitude decreases. The re-
sult is that all solutions converge on the same limit cycle at large times.
Many sets of equations that produce deterministic chaos for a range of para-
meter values produce limit cycles for other parameter values.
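The convergence onto the limit cycle can be verified numerically; a sketch that writes the nondimensional van der Pol equation as the standard first-order system dx/dt = y, dy/dt = ε(1 − x²)y − x and integrates it with RK4:

```python
# RK4 integration of the van der Pol system dx/dt = y,
# dy/dt = eps*(1 - x^2)*y - x. Orbits started inside and outside the
# limit cycle settle onto the same amplitude (about 2 for eps = 1).
def vdp_limit_amplitude(x, y, eps=1.0, dt=0.01, steps=6000, tail=2000):
    def f(x, y):
        return y, eps * (1.0 - x * x) * y - x
    amp = 0.0
    for n in range(steps):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        if n >= steps - tail:          # record amplitude after transients
            amp = max(amp, abs(x))
    return amp

print(round(vdp_limit_amplitude(0.1, 0.0), 2))  # started inside the cycle
print(round(vdp_limit_amplitude(4.0, 0.0), 2))  # started outside the cycle
```

Both runs settle onto the same amplitude, independent of the initial conditions, which is the defining property of the limit cycle.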
Before considering chaotic solutions we will present some further intro-
ductory material on singular points. Consider the pair of linear total differen-
tial equations

    dx/dt = a x + b y                                                (9.20)
    dy/dt = c x + f y                                                (9.21)

Figure 9.2. (a) Solutions of the nondimensional van der Pol equation (9.14)
in the phase plane with ε = 0. The solution is a circle representing simple
harmonic motion. The position of the circle is dependent upon initial
conditions. (b) Solution of the van der Pol equation (9.14) in the phase plane
with ε = 1. The solutions approach a limit cycle independent of initial
conditions. Solutions for initial conditions inside the limit cycle spiral out to
it and solutions for initial conditions outside the limit cycle spiral into it.

With b = c = 0, eliminating t gives the trajectories

    y = y_0 (x/x_0)^{f/a}                                            (9.24)

and the fixed point is stable for f < 0. Since a and f must have opposite signs,
one singular solution will be stable and the other singular solution will be
unstable.
We next substitute b = -1, c = 1, and a = f = a in (9.22) with the result

    dy/dx = (x + a y)/(a x - y)

Introducing the polar coordinates

    x = ρ cos θ
    y = ρ sin θ

gives

    ρ = ρ_0 e^{aθ}
9.2 Bifurcations

We first consider the equation

    dx/dt = μ - x²                                                   (9.32)

with fixed points x = ±μ^{1/2}. When μ is negative there are no real fixed
points, and when μ is positive there are two real fixed points. The transition
at μ = 0 from no solutions to two solutions is known as a turning point
bifurcation. We examine the stability of the two real roots by linearization.
We substitute x = ±μ^{1/2} + x̃_1 into (9.32) and linearize, obtaining
dx̃_1/dt = ∓2μ^{1/2} x̃_1.
Thus the fixed point x = μ^{1/2} is stable: solutions as they evolve in time con-
verge to it. The fixed point x = -μ^{1/2} is unstable: solutions as they evolve in
time diverge from it. The corresponding bifurcation diagram is given in Fig-
ure 9.4(a). This figure illustrates the meaning of the word bifurcation, to
split into two branches. This figure also shows that for x < -μ^{1/2} all solu-
tions diverge to x = -∞ and for -μ^{1/2} < x < +∞ all solutions converge to
the stable fixed point x = μ^{1/2}.
We now turn to a modified form of the logistic equation (9.1)

    dx/dt = μ x - x³                                                 (9.37)

Note that this equation is invariant under the transformation x' = -x. Thus
solutions are symmetric in x and fixed points must appear or disappear in
pairs. The fixed points of (9.37) obtained by setting dx/dt = 0 are

    x = 0, ±μ^{1/2}

When μ is negative there is a single real fixed point x = 0, and when μ is
positive there are three fixed points x = 0, ±μ^{1/2}. The transition at μ = 0
from one to three solutions is known, for obvious reasons, as a pitchfork bi-
furcation (although the word trifurcation would be more appropriate). A sta-
bility analysis shows that for μ < 0 the solution x = 0 is stable. For μ > 0
this solution is unstable but the other solutions are stable. The corresponding
bifurcation diagram is shown in Figure 9.4(c). For μ < 0 all solutions con-
verge to the stable fixed point x = 0. For μ > 0 all solutions for x > 0 con-
verge to the stable fixed point x = μ^{1/2} and all solutions for x < 0 converge
to the stable fixed point x = -μ^{1/2}.

Figure 9.4. (a) Illustration of a turning point bifurcation occurring at μ = 0.
The stable and unstable fixed points of (9.32) are given as a function of μ.
(b) Illustration of a transcritical bifurcation and exchange of stabilities
occurring at μ = 0. The stable and unstable fixed points of (9.36) are given
as a function of μ. (c) Illustration of a supercritical pitchfork bifurcation
occurring at μ = 0. The stable and unstable fixed points of (9.37) are given.
The transition is from a single stable branch for μ < 0 to three branches, two
stable and one unstable, for μ > 0. (d) Illustration of a subcritical pitchfork
bifurcation occurring at μ = 0. The stable and unstable fixed points of (9.39)
are given. The transition is from three branches, one stable and two
unstable, for μ < 0 to a single unstable branch for μ > 0.
An example of a subcritical pitchfork bifurcation is given by the equa-
tion

    dx/dt = μ x + x³                                                 (9.39)

in which case the transition at μ = 0 is from three branches (one stable, two
unstable) to a single unstable branch, as illustrated in Figure 9.4(d).
    As a final example, the pair of equations (9.40) and (9.41) can be written
in terms of the polar coordinates

    x = ρ cos θ                                                      (9.42)
    y = ρ sin θ                                                      (9.43)

as

    dρ/dt = ρ (μ - ρ²)                                               (9.44)
    dθ/dt = 1                                                        (9.45)

These equations have the fixed point solution ρ = 0 (x = y = 0); it is stable for
μ < 0 and unstable for μ > 0. In addition, for μ > 0, solutions of (9.44) and
(9.45) converge to a circular limit cycle given by

    ρ = μ^{1/2}                                                      (9.46)

These solutions are illustrated in Figure 9.5. For μ < 0, all solutions spiral
into the stable fixed point ρ = 0. For μ > 0, solutions for ρ > μ^{1/2} spiral into
the circular limit cycle given by (9.46); solutions for ρ < μ^{1/2} spiral outward
to this circular limit cycle. The transition from a stable branch for μ < 0 to a
stable limit cycle for μ > 0 is a Hopf bifurcation. The van der Pol equation
(9.14) also undergoes a Hopf bifurcation at ε = 0.
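The approach to the circular limit cycle can be checked with the standard radial normal form of the Hopf bifurcation, dρ/dt = ρ(μ − ρ²), which has the limit cycle ρ = μ^{1/2} for μ > 0 (a sketch; this normal form is assumed to match the system discussed in the text):

```python
# Euler sketch of the Hopf normal-form radial equation
# d(rho)/dt = rho*(mu - rho^2): for mu > 0 all orbits approach the
# circular limit cycle rho = sqrt(mu); for mu < 0 they decay to rho = 0.
def integrate_rho(rho, mu, dt=1e-3, steps=20000):
    for _ in range(steps):
        rho += dt * rho * (mu - rho * rho)
    return rho

print(round(integrate_rho(0.1, 0.25), 3))    # inside:  grows to 0.5
print(round(integrate_rho(1.0, 0.25), 3))    # outside: decays to 0.5
print(round(integrate_rho(0.1, -0.25), 3))   # mu < 0:  decays toward 0
```

For μ = 0.25 orbits started inside and outside both settle at ρ = 0.5 = μ^{1/2}, while for μ < 0 the origin is the only attractor.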
Problems
Problem 9.1. For b = c = 0 in (9.20) and (9.21) solve for y(t) and x(t) directly.
Show that these solutions reduce to (9.24).
Problem 9.2. Derive (9.30) from (9.27)-(9.29).
Problem 9.3. Solve (9.36) in the vicinity of the three fixed points.
Problem 9.4. Derive (9.44) and (9.45) from (9.40) and (9.41).
Problem 9.5. Consider the equation

Problem 9.7. Consider the equation

    dx/dt = x - x³

(a) What are the fixed points?
(b) Obtain the solution if x = x_0 at t = 0.
LOGISTIC MAP
10.1 Chaos

The logistic map is given by the recursive relation

    x_{n+1} = a x_n (1 - x_n)                                        (10.1)

This is a recursive relation that determines the sequence of values x_1, x_2,
x_3, . . . . An initial value, x_0, is chosen; this value is substituted into (10.1) as
x_n to give x_1, and the iteration is continued. In terms of the function

    f(x) = a x (1 - x)                                               (10.2)

the fixed points x_f of this equation are obtained by setting f(x_f) = x_f with the
result

    x_f = a x_f (1 - x_f)                                            (10.3)

This is equivalent to setting x_{n+1} = x_n. The two fixed points obtained by solv-
ing (10.3) are

    x_f = 0,  x_f = 1 - a^{-1}

The stability of a fixed point is determined by the quantity

    r = (df/dx)_{x = x_f}                                            (10.6)

This is the slope of the function f(x) evaluated at the fixed point x_f. If |r| < 1,
where |r| is the absolute value of r, the fixed point is attracting (stable), but
if |r| > 1 the fixed point is repelling (unstable). For the logistic map, from
(10.2) we find that

    r = a (1 - 2 x_f)

For positive values of a we find that the fixed point at x_f = 0 is stable for 0 <
a < 1 and unstable for a > 1. The fixed point x_f = 1 - a^{-1} is unstable for 0 <
a < 1, stable for 1 < a < 3, and unstable for a > 3.
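These stability ranges follow directly from the slope r = a(1 − 2x_f) of f(x) = ax(1 − x) at each fixed point; a short check:

```python
# Stability of the logistic-map fixed points from the slope r = f'(x_f),
# with f(x) = a x (1 - x): |r| < 1 stable, |r| > 1 unstable.
def fixed_points(a):
    return [0.0, 1.0 - 1.0 / a]

def slope(a, x):
    return a * (1.0 - 2.0 * x)

for a in (0.8, 2.0, 3.2):
    for xf in fixed_points(a):
        r = slope(a, xf)
        print(a, round(xf, 4), round(r, 4),
              "stable" if abs(r) < 1 else "unstable")
```

At x_f = 0 the slope is r = a, and at x_f = 1 − 1/a it is r = 2 − a, which reproduces the ranges quoted above.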
We next examine a sequence of iterations of the logistic map (10.1). As
our first example we consider the iteration for a = 0.8 as illustrated in Figure
10.1. The curve represents the function f(x) given by (10.2) for a = 0.8. Tak-
ing x_0 = 0.5 we draw a vertical line; its intersection with the parabolic curve
gives x_1 = 0.2. A horizontal line drawn from this intersection to the diagonal
line of unit slope transfers x_{n+1} to x_n. A vertical line is drawn to the parabola
giving x_2 = 0.128. Further iterations give x_3 = 0.0892928, x_4 = 0.06505567,
etc. The sequence iterates to the stable fixed point x_f = 0. All iterations con-
verge to x_f = 0 for 0 < x_0 < 1. As our next example we consider two iterations for
The limit cycle oscillates between x_f1 and x_f2. As an example of the period
n = 2 limit cycle, we consider the iteration for a = 3.1 given in Figure 10.3.
The iteration from x_0 = 0.1 approaches the limit cycle that oscillates between
x_f1 = 0.558 and x_f2 = 0.765. The n = 2 limit cycle occurs in the range 3 < a <
3.449499. At a = 3.449499 another flip bifurcation occurs and the period
n = 4 limit cycle is found in the range 3.449499 < a < 3.544090. At larger
values of a higher-order limit cycles are found. They are summarized by

    n = 2^k

where n is the period of the limit cycle and k is the number of flip bifurca-
tions that have occurred. Period-doubling flip bifurcations occur at a se-
quence of values a_k, where a_1 = 3, a_2 = 3.449499, a_3 = 3.544090, a_4 =
3.564407, a_5 = 3.568759, a_6 = 3.569692, a_7 = 3.569891, a_8 = 3.569934, etc.
In the region 3.569946 < a < 4 windows of chaos and multiple cycles occur.
The values of a_k approximately satisfy the Feigenbaum relation

    a_∞ = a_{k+1} + (a_{k+1} - a_k)/(F - 1),    F = 4.669202         (10.12)

Thus the initial values of the period-doubling sequence can be used to pre-
dict the onset of chaotic behavior at a_∞. Taking a_1 = 3 and a_2 = 3.449499, we
find that a_∞ = 3.572005 from (10.12). Taking a_2 and a_3 = 3.544090, we find
a_∞ = 3.569870. Taking a_3 and a_4 = 3.564407, we find a_∞ = 3.569944. Taking
a_4 and a_5 = 3.568759, we find a_∞ = 3.569945. These are clearly converging
on the observed value of a_∞ = 3.569946.
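The extrapolations can be reproduced with the Feigenbaum constant F = 4.669202, using the geometric-series form a_∞ ≈ a_{k+1} + (a_{k+1} − a_k)/(F − 1):

```python
# Extrapolating the flip-bifurcation sequence to the accumulation point
# a_inf with the Feigenbaum constant F = 4.669202, using the
# geometric-series form a_inf = a_k1 + (a_k1 - a_k)/(F - 1).
F = 4.669202

def a_infinity(a_k, a_k1):
    return a_k1 + (a_k1 - a_k) / (F - 1.0)

pairs = [(3.0, 3.449499), (3.449499, 3.544090),
         (3.544090, 3.564407), (3.564407, 3.568759)]
for a_k, a_k1 in pairs:
    print(round(a_infinity(a_k, a_k1), 6))
# successive estimates converge toward the observed 3.569946
```

The four printed estimates match the values quoted in the text (3.572005, 3.569870, 3.569944, 3.569945).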
We now turn to the behavior of the logistic map in the region of cha-
otic behavior. An example illustrating chaotic behavior is given in Figure
10.5 with a = 3.9; one thousand iterations are shown and no convergence
to a limit cycle is observed. The behavior is space filling (chaotic) but the
range of values of x_n is well defined. The maximum value is obtained taking
x_n = 0.5 with the result x_{n+1} = 0.975. The minimum value is obtained tak-
ing x_n = 0.975 with the result x_{n+1} = 0.0950625. Thus we have for a = 3.9
that 0.0950625 < x_n < 0.975. For a = 4 the logistic map (10.1) becomes

    x_{n+1} = 4 x_n (1 - x_n)                                        (10.16)

This iteration has the exact solution

    x_n = sin² (2^n πβ)

Provided β is not a rational number, the values of x_n jump around randomly
and fully chaotic behavior is obtained.
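For a = 4 the map has the well-known exact solution x_n = sin²(2^n πβ); a quick numerical check that direct iteration reproduces the closed form (β = 1/e here is an arbitrary irrational choice):

```python
import math

# For a = 4 the logistic map has the exact solution
# x_n = sin^2(2^n * pi * beta); direct iteration reproduces it.
def logistic4(x):
    return 4.0 * x * (1.0 - x)

beta = 1.0 / math.e                        # an arbitrary irrational choice
x = math.sin(math.pi * beta) ** 2          # x_0
for n in range(1, 6):
    x = logistic4(x)
    closed = math.sin(2.0 ** n * math.pi * beta) ** 2
    print(n, round(x, 8), round(closed, 8))
```

The iterated and closed-form values agree to rounding; the factor 2^n inside the sine is what makes nearby initial values diverge exponentially.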
The route to chaos and the windows of chaotic behavior of the logistic
map are illustrated in the bifurcation diagram given in Figure 10.6. The
asymptotic, large n, behavior of the map is illustrated for 2.9 < a < 4.0.
    The maximum value of x_n in the region of chaotic behavior is obtained
by taking x_n = 0.5 in (10.1), with the result

    x_f max = a/4                                                    (10.17)

Taking a = 3.9 we have x_f max = 0.975, which is in agreement with the exam-
ple given in Figure 10.5. The minimum value of x_n is obtained by substitut-
ing (10.17) into (10.1) with the result

    x_f min = a² (4 - a)/16
10.2 Lyapunov exponent

The Lyapunov exponent λ provides a quantitative measure of whether adja-
cent solutions converge or diverge; it is defined by

    δx_n = δx_0 2^{λn}

where δx_n is the incremental difference after the nth iteration if δx_0 is the in-
cremental difference in the initial value. If the Lyapunov exponent is nega-
tive, adjacent solutions converge and deterministic solutions are obtained. If
the Lyapunov exponent is positive, adjacent solutions diverge exponentially
and chaos ensues. To determine the Lyapunov exponent, we consider the in-
cremental divergence in a single iteration by writing (10.1) in the form
where f(x) is the functional form of the mapping; for the logistic map it is
given by (10.2). Since

    x_{n+1} = f(x_n)

we have δx_{n+1} = |df/dx|_{x_n} δx_n, and the Lyapunov exponent is given by

    λ = lim_{m→∞} (1/m) Σ_{n=0}^{m-1} log_2 |df/dx|_{x = x_n}
where log_2 is the logarithm to the base 2. The Lyapunov exponents λ for the
logistic map (10.1) are given in Figure 10.7 for a range of values for a. The
windows of chaotic behavior for 3.569946 < a < 4 where λ is positive are
clearly illustrated. The Lyapunov exponent goes to zero at each flip bifurca-
tion as shown.
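The Lyapunov exponent can be computed numerically from λ = (1/m) Σ log₂|f′(x_n)|, with f′(x) = a(1 − 2x) for the logistic map; a sketch:

```python
import math

# Numerical Lyapunov exponent of the logistic map:
# lambda = (1/m) * sum of log2|f'(x_n)|, with f'(x) = a*(1 - 2x).
def lyapunov(a, x0=0.3, transient=1000, m=100000):
    x = x0
    for _ in range(transient):             # discard transient iterations
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(m):
        total += math.log2(abs(a * (1.0 - 2.0 * x)))
        x = a * x * (1.0 - x)
    return total / m

print(round(lyapunov(2.8), 3))   # stable fixed point: lambda < 0
print(round(lyapunov(3.9), 3))   # chaotic: lambda > 0
print(round(lyapunov(4.0), 3))   # fully chaotic: lambda near 1
```

The sign of λ separates the periodic and chaotic regimes, and a = 4 reproduces the special value λ = 1 derived in the text.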
Consider as a particular example the iteration for a = 4 given by (10.16).
For this case we find

    dx_n/dx_0 = 2^n [sin (2^n πβ) cos (2^n πβ)]/[sin (πβ) cos (πβ)]

and

    2^{λn} = 2^n |sin (2^n πβ) cos (2^n πβ)|/|sin (πβ) cos (πβ)|

Although the coefficient is variable, the growth with n as n → ∞ requires
that λ = 1. Thus the Lyapunov exponent for this special case is unity and the
iteration is fully chaotic.
The role of the Lyapunov exponent is clearly illustrated by the simple
linear map

    x_{n+1} = a x_n                                                  (10.28)

The only singular point is at x_f = 0 and it is stable if a < 1 and unstable if a >
1. Illustrations of the iteration of this linear map for a = 0.6, 1.2 and x_0 = 0.8
are given in Figure 10.8. With a = 0.6 we have x_1 = 0.48, x_2 = 0.288, x_3 =
0.1728, and x_4 = 0.10368, and the solution iterates to the stable fixed point
x_f = 0. With a = 1.2 we have x_1 = 0.96, x_2 = 1.152, x_3 = 1.3824, and x_4 =
1.65888, and the solution iterates to x → ∞. If there is an incremental differ-
ence in x_0, δx_0, the incremental difference in x_1, δx_1, is δx_1 = a δx_0; similarly
the incremental difference in x_2, δx_2, is δx_2 = a δx_1 = a² δx_0. This can be gen-
eralized to give δx_n = a^n δx_0 and, comparing with the definition of λ,

    λ = log a/log 2                                                  (10.31)

Thus the Lyapunov exponent λ is positive for a > 1 and adjacent solutions
diverge; the Lyapunov exponent is negative for 0 < a < 1 and adjacent solu-
tions converge.
We next consider the triangular or tent map defined by

    x_{n+1} = 2a [(1/2) - |x_n - (1/2)|]                             (10.32)

with 0 < a < 1 and 0 < x < 1. This map can also be defined by

    x_{n+1} = 2a x_n         for 0 < x_n < 1/2
                                                                    (10.33)
    x_{n+1} = 2a (1 - x_n)   for 1/2 < x_n < 1

For 0 < a < 1/2 the only fixed point is at x_f = 0 and it is stable. For 1/2 < a < 1
there are fixed points at x_f = 0 and x_f = 2a/(1 + 2a); both fixed points are
unstable.
Illustrations of the iteration of this map for a = 0.4, 0.8, and x_0 = 0.8 are
given in Figure 10.9. With a = 0.4 we have x_1 = 0.16, x_2 = 0.128, x_3 = 0.1024,
and x_4 = 0.08192, and the solution iterates to the stable fixed point x_f = 0.
With a = 0.8 we have x_1 = 0.32, x_2 = 0.512, x_3 = 0.7808, and x_4 = 0.35072.
The iteration for a = 0.8 is chaotic. All iterations with 0.5 < a < 1 are
chaotic; this can be demonstrated by noting that the Lyapunov exponents for
the triangular map are easily obtained from the Lyapunov exponents for the
linear map given in (10.31); the result is

    λ = log 2a/log 2

Thus λ is negative for 0 < a < 0.5 and positive for 0.5 < a < 1. In the chaotic
regime, the range of values of x is 2a (1 - a) < x < a. The resulting bifurca-
tion diagram for the triangular map is given in Figure 10.10. For 0 < a < 0.5
the solutions converge to the stable fixed point x_f = 0. For 0.5 < a < 1.0
chaotic behavior is found between the limits given above.
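Both the positive Lyapunov exponent and the chaotic range 2a(1 − a) < x < a can be verified by direct iteration of the tent map; a sketch for a = 0.8:

```python
import math

# Tent-map check for a = 0.8: the Lyapunov exponent log(2a)/log 2 is
# positive, and iterates fill the range 2a(1 - a) < x < a.
def tent(a, x):
    return 2.0 * a * x if x < 0.5 else 2.0 * a * (1.0 - x)

a = 0.8
print(round(math.log(2.0 * a) / math.log(2.0), 3))   # positive: chaotic

x, lo, hi = 0.3, 1.0, 0.0
for _ in range(10000):
    x = tent(a, x)
    lo, hi = min(lo, x), max(hi, x)
print(round(lo, 3), round(hi, 3))   # approaches the limits 0.32 and 0.8
```

For a = 0.8 the predicted limits are 2a(1 − a) = 0.32 and a = 0.8, and the observed extremes of the iteration approach them.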
The recursive maps considered in this chapter are clearly very simple
models with limited direct applicability to problems in geology and geo-
physics. However, the complex chaotic behavior exhibited by the simple
models is strongly indicative that many natural systems can be expected to
behave chaotically. Some natural systems exhibit behavior that closely re-
sembles the behavior of recursive maps. As an example Sornette et al.
(1991) and Dubois and Cheminee (1991) have treated the return periods for
eruptions of the volcanoes Piton de la Fournaise on Reunion Island and
Mauna Loa and Kilauea in Hawaii as return maps. The results appear to re-
semble the chaotic maps considered in this chapter.
Problems
Problem 10.1. Determine x_1, x_2, x_3, and x_4 for the logistic map (10.1) taking
a = 0.5 and x_0 = 0.5. What is the value of x_f?
Problem 10.2. Determine x_1, x_2, x_3, and x_4 for the logistic map (10.1) taking
a = 0.9 and x_0 = 0.75. What is the value of x_f?
Problem 10.3. Determine x_1, x_2, x_3, and x_4 for the logistic map (10.1) taking
a = 2 and x_0 = 0.2. What is the value of x_f?
Problem 10.4. Determine x_1, x_2, x_3, and x_4 for the logistic map (10.1) taking
a = 2.5 and x_0 = 0.3. What is the value of x_f?
Problem 10.5. Determine x_f1 and x_f2 for the logistic map (10.1) taking a =
3.2.
Problem 10.6. Determine x_f1 and x_f2 for the logistic map (10.1) taking a =
3.4.
Problem 10.7. For a = 3.7 the logistic map (10.1) is fully chaotic. What are
the maximum and minimum values of x_n?
Problem 10.8. For a = 3.8 the logistic map (10.1) is fully chaotic. What are
the maximum and minimum values of x_n?
Problem 10.9. Determine x_1, x_2, x_3, x_4, and x_5 for the logistic map (10.1) tak-
ing a = 4 and β = (2π)^{-1}.
Problem 10.10. Determine x_1, x_2, x_3, x_4, and x_5 for the logistic map (10.1) tak-
ing a = 4 and β = (3π)^{-1}.
Problem 10.11. Show that x_f = 0 is a fixed point of the linear map (10.28).
Determine the value of r defined in (10.6) for this map and determine
the stability of the fixed point.
Problem 10.12. Determine the fixed points of the triangular map (10.32).
Determine the values of r defined in (10.6) for the fixed points and
determine their stability.
Problem 10.13. Consider the iterative map
This map is also used for population dynamics (May and Oster, 1976).
(a) Determine the fixed points and the range of positive values for a that
are stable.
(b) Determine x_1, x_2, x_3, x_4, and x_5 taking a = 3 and x_0 = 0.5.
(c) For a = 3, what are the maximum and minimum values of xn?
Chapter Eleven
SLIDER-BLOCK
MODELS
Figure 11.1. Illustration of
the slider-block model for
fault behavior. The constant
velocity driver extends the
spring until the force ky
exceeds the static friction
force Fs.
where m is the mass of the block and F_d is the dynamic or sliding friction.
The sliding is analogous to an earthquake and relieves the stress in the spring
in analogy to elastic rebound. The further assumption is made that the load-
ing velocity of the driver, v, is sufficiently slow so that we may assume it to
be zero during the sliding of the block. This is reasonable since an earth-
quake lasts only a few tens of seconds, whereas the interval between earth-
quakes on a fault is typically hundreds of years or more.
The static-dynamic friction law is the simplest that generates stick-slip
behavior. A necessary and sufficient condition for stick-slip behavior is that
the static friction exceeds the dynamic friction, Fs > Fd. A variety of empiri-
cal velocity-weakening friction laws are in agreement with laboratory obser-
vations and also generate stick-slip behavior. Dynamic instabilities associ-
ated with complicated friction laws are well known from single-block
models (Byerlee, 1978; Dieterich, 1981; Ruina, 1983; Rice and Tse, 1986).
Slider-block models have been used to simulate foreshocks, aftershocks,
pre- and post-seismic slip, and earthquake statistics (Dieterich, 1972; Run-
dle and Jackson, 1977; Cohen, 1977; Cao and Aki, 1984, 1986). Gu et al.
(1984) found some chaotically bounded oscillations; Nussbaum and Ruina
(1987) used a two-block model with spatial symmetry and found periodic
behavior. Huang and Turcotte (1990a, 1992) and McCloskey and Bean
(1992) studied the same system without spatial symmetry and obtained clas-
sic chaotic behavior.
We first consider the solution for the behavior of the single block shown
in Figure 11.1. It is convenient to introduce the nondimensional variables
Y = 1/φ + (1 − 1/φ) cos τ
Sliding ends at τ = π when dY/dτ is again zero. When the velocity is zero the friction jumps to its static value, preventing further sliding. The position of the block at the end of sliding is Y = (2/φ) − 1, so that the slip during sliding is
ΔY = (2/φ) − 2 = −2(1 − 1/φ)
The dependence of Y and dY/dτ on τ during sliding is given in Figure 11.2 for φ = 1.25. For this case Y drops from 1 to 0.6 during sliding and ΔY = −0.4.
After sliding is completed, the spring extends due to the velocity of the driver
until Y again equals unity and the cycle repeats. With a single slider block
periodic behavior is obtained. The variables Y and dY/dτ define a phase plane for the solution.
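The single-block cycle can be checked directly from the slip solution Y(τ) = 1/φ + (1 − 1/φ) cos τ given above. The short sketch below (function name and sampling are illustrative) tabulates the position during slip and recovers the end-of-slip position and the slip ΔY for φ = 1.25.

```python
import math

def slip_phase(phi, nsteps=100):
    """Position of a single slider block during slip,
    Y(tau) = 1/phi + (1 - 1/phi) cos(tau), for 0 <= tau <= pi."""
    taus = [math.pi * i / nsteps for i in range(nsteps + 1)]
    return [(t, 1.0 / phi + (1.0 - 1.0 / phi) * math.cos(t)) for t in taus]

phi = 1.25
path = slip_phase(phi)
y_end = path[-1][1]       # position when sliding stops, (2/phi) - 1
delta_y = y_end - 1.0     # slip during the event, -2*(1 - 1/phi)
print(y_end, delta_y)     # approximately 0.6 and -0.4 for phi = 1.25
```

The values 0.6 and −0.4 reproduce the case illustrated in Figure 11.2.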
We next consider the behavior of a pair of slider blocks as illustrated in
Figure 11.3. We will show that the behavior of the blocks can be a classical
example of deterministic chaos. The blocks are an analog of two interacting
faults or two interacting segments of a single fault. A constant velocity driver
drags the blocks over the surface at a mean velocity v. The two blocks are
coupled to each other and to the constant velocity driver with springs whose
constants are kc, k1, and k2. Other model parameters are the block masses m1 and m2 and the frictional forces F1 and F2. The position coordinates of the blocks relative to the constant velocity driver are y1 and y2. The static conditions for the onset of sliding are the force balances
(11.10)
(11.11)
(11.12)
In the limit α → 0 the blocks are decoupled and each will exhibit the periodic behavior described above for a single block. As α → ∞ the blocks become locked together and act as a single block that again exhibits the periodic behavior described above. If β = 1 there is complete symmetry between the two blocks; β ≠ 1 introduces an asymmetry. In terms of the nondimensional variables (11.3) and parameters (11.12), the sliding conditions (11.8) and (11.9) become
The blocks are expected to exhibit stick-slip behavior for φ > 1. The first block will begin sliding if (11.13) is satisfied, and the second block will begin to slide if (11.14) is satisfied. Together (11.13) and (11.14) define a failure envelope in the Y1Y2-plane. The sliding behavior is governed by (11.15) and (11.16). In some cases the sliding of one block induces the sliding of the second block.
The solutions for the behavior of the blocks can be represented in a four-dimensional phase space consisting of Y1, Y2, dY1/dτ, and dY2/dτ. For simplicity we consider the projection of the solution onto the Y1Y2-plane.
We first consider the symmetric case in which both blocks have the same frictional behavior, that is, β = 1. An example with α = 3 and φ = 1.25 is given in Figure 11.4. The diagonal lines converging at Y1 = Y2 = 1 are the failure envelope given by (11.13) and (11.14). A periodic orbit is given by abcd in Figure 11.4. At point a block 2 fails with Y1 = 0.780 and Y2 = 0.835. During the slip of block 2, block 1 remains fixed and the slip of block 2 is represented by the vertical line ab in the Y1Y2 phase plane. The termination of the sliding of block 2 is obtained from (11.16). Sliding of block 2 terminates at point b, where Y1 = 0.780 and Y2 = 0.735. From point b to point c the
blocks stick and the springs extend due to the movement of the constant-ve-
locity driver. The increments in Y1 and Y2 are equal and the strain-accumulation phase is represented by the diagonal line bc, which has unit slope. The termination of this strain accumulation occurs when this line intercepts the failure envelope. This occurs at point c, where Y1 = 0.865 and Y2 = 0.820. During the slip of block 1, block 2 remains fixed and the slip of block 1 is represented by the horizontal line cd. The termination of the sliding of block 1 is obtained from (11.15). Sliding of block 1 terminates at point d, where Y1 = 0.765 and Y2 = 0.820. Between points d and a the blocks stick and the springs again extend at equal rates due to the movement of the constant-velocity driver. The increments in Y1 and Y2 are equal and the strain-accumulation phase is represented by the diagonal line da, which has unit slope. The termination of this strain-accumulation phase occurs when this line intercepts the failure envelope. This occurs at point a, and the cycle repeats. The behavior of this symmetrical two-block model is periodic, with first one block sliding and then the other.
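The abcd cycle can be reproduced with a simple quasi-static sketch. Two assumptions are made here, neither stated explicitly in the text but both consistent with the numbers quoted above: (i) block i fails when Y_i + α(Y_i − Y_j) = 1, and (ii) while one block slips with the other fixed, it follows the single-block solution about its instantaneous equilibrium Y_eq = (1/φ + αY_j)/(1 + α), so that slip ends at 2Y_eq − Y_i (reducing to Y → 2/φ − 1 when α = 0). The numerical labels a, b, c, d below are the points of Figure 11.4.

```python
# Quasi-static sketch of the symmetric two-block model (beta = 1).
# Assumed failure condition: block i fails when Y_i + alpha*(Y_i - Y_j) = 1.
# Assumed slip rule: with block j fixed, block i stops at 2*Y_eq - Y_i,
# where Y_eq = (1/phi + alpha*Y_j)/(1 + alpha).
alpha, phi = 3.0, 1.25

def load_increment(y1, y2):
    """Equal loading of both blocks until one of them reaches failure."""
    d1 = 1.0 - y1 - alpha * (y1 - y2)   # loading needed to fail block 1
    d2 = 1.0 - y2 - alpha * (y2 - y1)   # loading needed to fail block 2
    d = min(d1, d2)
    return y1 + d, y2 + d

def slip(y_slip, y_fixed):
    y_eq = (1.0 / phi + alpha * y_fixed) / (1.0 + alpha)
    return 2.0 * y_eq - y_slip

y1, y2 = 0.780, 0.835            # point a: block 2 at failure
y2 = slip(y2, y1)                # a -> b: block 2 slides
b = (y1, y2)
y1, y2 = load_increment(y1, y2)  # b -> c: strain accumulation (block 1 fails)
c = (y1, y2)
y1 = slip(y1, y2)                # c -> d: block 1 slides
d = (y1, y2)
y1, y2 = load_increment(y1, y2)  # d -> a: the cycle closes
print(b, c, d, (y1, y2))
```

With α = 3 and φ = 1.25 the sketch returns b = (0.780, 0.735), c = (0.865, 0.820), d = (0.765, 0.820), and then returns to a = (0.780, 0.835), matching the periodic orbit described above.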
We next consider an asymmetric case with β = 2.5, α = 3.49, and φ = 1.25. The results are given in Figure 11.5. The behavior is similar to that given in Figure 10.5 and is fully chaotic. The curves that fall outside the failure envelope are cases in which both blocks are sliding simultaneously.
When a diagonal strain-accumulation line intercepts the upper failure enve-
lope, block 2 begins to slide. Because β is relatively large, the failure force
for block 2 is considerably larger than the failure force for block 1. Thus the
vertical failure path for block 2 crosses the failure envelope of block 1 and it
begins to slide. The sliding of both blocks results in S-shaped curves. The
next strain-accumulation phase intercepts the lower failure envelope and
block 1 begins to slide. Because of the large force required to induce the fail-
ure of block 2, these horizontal failure paths do not cross the upper failure
envelope. The result is a chaotic sequence of failures of both blocks together,
followed by a failure of the weaker block 1.
To study this behavior further, a bifurcation diagram is given in Figure 11.6. The values of Y2 − Y1 at the termination of slip are given for various values of α with β = 2.25 and φ = 1.25. A detailed illustration of the behavior in the range 3.2 < α < 3.5 is given in Figure 11.7. Solutions evolve to an
asymptotic, large-time behavior independent of the initial conditions. As il-
lustrated in Figures 11.6 and 11.7, the system may evolve to limit-cycle behavior or chaotic behavior. A series of period-doubling pitchfork bifurcations is clearly illustrated in Figure 11.7. The cyclic behavior for the n = 2 limit cycle obtained for α = 3.25 is given in Figure 11.8. The cyclic behavior for the n = 4 limit cycle obtained for α = 3.38 is given in Figure 11.9. These
limit cycles evolve into the type of chaotic behavior illustrated in Figure
11.5. The behavior of the asymmetric two-block model is remarkably similar
to that of the logistic map.
The values of the Lyapunov exponents corresponding to the points given
on the bifurcation diagram in Figure 11.6 are given in Figure 11.10. The win-
dows of chaotic behavior are clearly illustrated.
An example is the subduction zone of the Nankai trough along the coast of southwestern Japan. The relative
motion between the plates has resulted in a sequence of great earthquakes
that have been documented through historical records for the period AD
684-1946. The sequence is marked by an irregular but somewhat repetitive
pattern in which whole section failures occur following several alternate
failures of single segments. In the two-block model the simultaneous slip of
both blocks corresponds to an earthquake that ruptures the entire section,
and single-block failures correspond to an earthquake on a single segment.
Taking β = 1.05 and α = 0.81, Huang and Turcotte (1990a) found chaotic
model behavior that strongly resembled the observed sequence of earth-
quakes in the Nankai trough.
Another example is the interaction between the Parkfield segment and
the rest of the south central locked segment of the San Andreas fault in Cali-
fornia. A sequence of magnitude-six earthquakes occurred on the Parkfield
segment in 1881, 1901, 1922, 1934, and 1966. The last great earthquake on
the locked segment to the south occurred in 1857 and is also associated with
a rupture on the Parkfield segment. Taking β = 2 and α = 1.2, Huang and Turcotte (1990a) found chaotic model behavior similar to that described
above. A sequence of slip events on the weaker block often preceded the si-
multaneous slip of the weaker and stronger blocks. The model simulation
suggested two alternative scenarios for a great southern California earth-
quake following a sequence of Parkfield earthquakes. In the first case a
Parkfield earthquake will transfer sufficient stress to trigger the great south-
ern California earthquake; the Parkfield earthquake is thus essentially a fore-
shock for the great earthquake. In the second case a small additional strain
after a Parkfield earthquake will trigger an earthquake on the southern sec-
tion and this will result in an additional displacement on the Parkfield sec-
tion. The evolution of the system is chaotic: its evolution is not predictable
except in a statistical sense.
Spring-block models are a simple analog for the behavior of faults in the
earth's crust. However, the chaotic behavior of low-dimensional analog sys-
tems often indicates that natural systems will also behave chaotically. Thus it
is reasonable to conclude that the interaction between faults that leads to the
fractal frequency-magnitude statistics discussed in Chapter 4 is an example
of deterministic chaos. The prediction of earthquakes is not possible in a de-
terministic sense. Only a probabilistic approach to the occurrence of earth-
quakes will be possible.
Problems
Problem 11.1. Consider a single slider block with φ = 1.5. (a) At what value of Y does slip occur? (b) What is the value of Y after slip?
Problem 11.2. Consider a single slider block with φ = 3. (a) At what value of Y does slip occur? (b) What is the value of Y after slip?
Problem 11.3. For a single slider block determine the dependence of V = dY/dτ on Y during slip.
Problem 11.4. Consider a pair of slider blocks with α = 0, β = 1, and φ = 2. Assume that initially Y1 = 0.5, Y2 = 0. (a) What are the values of Y1 and Y2 when block 1 first slips? (b) What are the values of Y1 and Y2 after block 1 slips? (c) What are the values of Y1 and Y2 when block 2 first slips? (d) What are the values of Y1 and Y2 after block 2 slips? (e) Draw the behavior of the system in the Y1Y2 phase plane.
Problem 11.5. Consider a pair of slider blocks with α = 0, β = 1, and φ = 4/3. Assume that initially Y1 = 0.75, Y2 = 0.5. (a) What are the values of Y1 and Y2 when block 1 first slips? (b) What are the values of Y1 and Y2 after block 1 slips? (c) What are the values of Y1 and Y2 when block 2 first slips? (d) What are the values of Y1 and Y2 after block 2 slips? (e) Draw the behavior of the system in the Y1Y2 phase plane.
Chapter Twelve
LORENZ EQUATIONS
Sets of coupled nonlinear differential equations can also yield solutions that
are examples of deterministic chaos. The classic example is the Lorenz
equations. Lorenz (1963) derived a set of three coupled total differential
equations as an approximation for thermal convection in a fluid layer heated
from below. He showed that the solutions in a particular parameter range had
exponential sensitivity to initial conditions and were thus an example of de-
terministic chaos. This was the first demonstration of chaotic behavior. The
Lorenz equations have been studied in detail by Sparrow (1982).
Because of their historical significance and because thermal convection
in the earth's mantle drives plate tectonics, we will consider the Lorenz
equations in some detail. When a fluid is heated its density generally de-
creases because of thermal expansion. We consider a fluid layer of thickness
h that is heated from below and cooled from above; the cool fluid near the
upper boundary is dense and the fluid near the lower boundary is light. This
situation is gravitationally unstable. The cool fluid tends to sink and the hot
fluid tends to rise. This is thermal convection.
Appropriate forms of the continuity, force balance, and energy balance
equations are required for a quantitative study of thermal convection. We
will restrict our attention to two-dimensional flows in which the velocities
are confined to the xy-plane. Continuity of fluid requires that
∂u/∂x + ∂v/∂y = 0 (12.5)
The density difference Δρ in the buoyancy term of the vertical force equation (12.3) is related to this temperature difference by
The problem has been reduced to the solution of two partial differential equations for ψ and θ.
To better understand the roles of various terms, it is appropriate to intro-
duce the nondimensional variables
where Ra is the Rayleigh number and Pr is the Prandtl number. The Rayleigh
number is a measure of the strength of the buoyancy forces that drive con-
vection relative to the viscous forces that damp convection. The higher the
Rayleigh number the stronger the convection. The Prandtl number is the ratio of the momentum diffusivity to the thermal diffusivity. It is instructive to estimate these two parameters for the earth's mantle. Due to solid-state creep, the earth's mantle has a mean viscosity of about μ = 10^21 Pa s, its thickness is h = 2880 km, and the temperature increase across it is estimated to be T2 − T1 = 3000 K. For the rock properties we take κ = 1 mm^2 s^-1 and α = 3 × 10^-5 K^-1. We assume g = 10 m s^-2 and an average density ρ = 4000 kg m^-3
and find Ra = 8.6 × 10^7 and Pr = 2.5 × 10^23, both very large values. The be-
havior of the earth's mantle will be discussed further in the next chapter.
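These estimates can be checked by direct arithmetic. The sketch below (a check, not part of the original text) uses the standard definitions Ra = ρgα(T2 − T1)h^3/(μκ) and Pr = μ/(ρκ) with the round-number mantle values quoted above; the quoted Prandtl number is reproduced exactly, while the Rayleigh number depends on the precise values adopted.

```python
# Mantle Rayleigh and Prandtl numbers from the values quoted in the text,
# using the standard definitions Ra = rho*g*alpha*dT*h**3/(mu*kappa)
# and Pr = mu/(rho*kappa).
rho = 4000.0      # mean density, kg m^-3
g = 10.0          # gravitational acceleration, m s^-2
alpha = 3.0e-5    # thermal expansivity, K^-1
dT = 3000.0       # temperature difference T2 - T1, K
h = 2880e3        # mantle thickness, m
mu = 1.0e21       # dynamic viscosity, Pa s
kappa = 1.0e-6    # thermal diffusivity, m^2 s^-1 (1 mm^2 s^-1)

Ra = rho * g * alpha * dT * h**3 / (mu * kappa)
Pr = mu / (rho * kappa)
print(f"Ra = {Ra:.2e}, Pr = {Pr:.2e}")   # Ra ~ 8.6e7, Pr = 2.5e23
```

Both numbers are enormously larger than the values at which convection first sets in, which is the point of the estimate.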
We now return to the basic equations. Substitution of (12.11) to (12.13)
into (12.9) and (12.10) gives
The solution is determined by the two parameters Ra and Pr and the bound-
ary conditions. For small values of the Rayleigh number, the viscous forces
are sufficiently strong to prevent any flow. Thus there is a critical minimum
value of the Rayleigh number for the onset of thermal convection.
We next consider a linearized stability analysis for the onset of convec-
tion as given by Rayleigh (1916). Only terms linear in θ and ψ are retained and the marginal stability problem is solved by setting ∂/∂t = 0. Thus (12.14) and (12.15) become
and (12.15) become
ψ = ψ0 sin(2πx/λ) sin πy (12.18)
θ = θ0 cos(2πx/λ) sin πy (12.19)
Ra_c = 27π^4/4 = 657.5 (12.21)
This is the critical Rayleigh number for the onset of thermal convection in a
fluid layer heated from below. At Rayleigh numbers less than that given by
(12.21), thermal convection will not occur. The nondimensional wavelength
corresponding to (12.21) is
λ = 2^{3/2} (12.22)
This is the wavelength of the initial convective flow that takes the form of
counter-rotating, two-dimensional cells. Each cell has a width 2^{1/2}h, one-half
the wavelength of the initial disturbance.
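The minimum can be verified numerically. The sketch below assumes the standard free-slip marginal-stability relation Ra(λ) = π^4(4 + λ^2)^3/(4λ^4), presumably the content of (12.20), which is not reproduced above; minimizing it over λ recovers the critical values (12.21) and (12.22).

```python
import math

def Ra_marginal(lam):
    """Assumed free-slip marginal-stability relation,
    Ra(lam) = pi^4 (4 + lam^2)^3 / (4 lam^4)."""
    return math.pi**4 * (4.0 + lam * lam)**3 / (4.0 * lam**4)

# crude scan for the minimizing nondimensional wavelength
lams = [1.0 + 0.0001 * i for i in range(40000)]   # 1.0 to 5.0
lam_c = min(lams, key=Ra_marginal)
print(lam_c, Ra_marginal(lam_c))   # ~2.828 (= 2^{3/2}) and ~657.5
```

The scan returns λ ≈ 2.828 = 2^{3/2} and Ra_c ≈ 657.5 = 27π^4/4, in agreement with (12.21) and (12.22).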
A pitchfork bifurcation occurs at the critical Rayleigh number. If Ra <
Rac the only solution is the conduction solution, which is stable. Above the
critical Rayleigh number the conduction solution remains a solution of the
governing equations but it is now unstable. Above the critical Rayleigh num-
ber there are two stable convective solutions corresponding to cellular rolls
rotating either clockwise or counterclockwise. This is identical to the pitch-
fork bifurcation illustrated in Figure 9.4(b).
Because the applicable equations are linear, the stability analysis does
not predict the amplitude of the convection. It is not possible to specify the
value of ψ0 in (12.18) and (12.19). To determine the amplitude of the thermal
convection, it is necessary to retain nonlinear terms. One approach to the so-
lutions of the full nonlinear equations (12.14) and (12.15) is to expand the
variables ψ and θ in double Fourier series in x and y with coefficients that
are functions of time. Lorenz (1963) strongly truncated these series and re-
tained only three terms of the form
ψ = [(4 + λ^2)/(2^{1/2}λ)] A(τ) sin(2πx/λ) sin πy (12.23)
θ = [π^3(4 + λ^2)^3/(4λ^4 Ra)] [C(τ) sin 2πy − 2^{1/2} B(τ) cos(2πx/λ) sin πy] (12.24)
where
with Rac given by (12.20). These equations satisfy the same set of boundary
conditions that (12.18) and (12.19) satisfy. The expansion of the stream
function (12.23) is essentially identical to the form used in the linear stabil-
ity analysis (12.18). However, the expansion of the temperature (12.24) in-
cludes an additional term that is not dependent on x.
It is necessary to derive differential equations for the time dependence of
the coefficients A(τ), B(τ), and C(τ). This is done by substituting the expansions
(12.23) and (12.24) into the governing equations (12.14) and (12.15). Coef-
ficients of the Fourier terms are equated to obtain the necessary equations.
All nonlinear terms in the stream function equation (12.14) are ne-
glected. When (12.23) and (12.24) are substituted into this equation the re-
sult is
where
The three first-order total differential equations (12.27), (12.30), and (12.31)
are the Lorenz equations. These equations would be expected to give accu-
rate solutions to the full equations when the Rayleigh number is slightly su-
percritical, but large errors would be expected for strong convection because
of the extreme truncation.
Solutions of the Lorenz equations represent cellular, two-dimensional
convection. Because only one term is retained in the expansion of the stream
function, the particle paths are closed and represent streamlines even when
the flow is unsteady. The time dependence of the coefficient A determines
the velocity of a fluid particle. But the fluid particle follows the same closed
trajectory independent of its time variation. The coefficient B represents
temperature variations associated with the stream function mode A. The co-
and substituting into (12.27), (12.30), and (12.31) with linearization gives
the characteristic equation
This equation has one real negative root and two complex conjugate roots
when r > 1. If the product of the coefficients of λ^2 and λ equals the constant
term we obtain
At this value of r the complex roots of (12.38) have a transition from nega-
tive to positive real parts. This is the critical value of r for the instability of
steady convection and represents a subcritical Hopf bifurcation. If Pr > b + 1
the steady solutions given by (12.33) and (12.34) are unstable for Rayleigh
numbers larger than those given by (12.39).
To examine further the behavior of the Lorenz equations it is necessary
to carry out numerical solutions. Following Lorenz (1963) we consider λ = 8^{1/2}, the critical value from (12.22), so that b = 8/3. For these values the steady-state solution given by (12.23), (12.24), (12.33), and (12.34) becomes
ψ = ±[24(r − 1)]^{1/2} sin(2^{-1/2}πx) sin πy (12.40)
θ = (1/πr) {(r − 1) sin 2πy ∓ [16(r − 1)/3]^{1/2} cos(2^{-1/2}πx) sin πy} (12.41)
This steady-state solution is valid if r > 1 and r is less than the critical value given by (12.39).
As the Rayleigh number increases above r = 1, the strength of the con-
vection increases, as indicated by (12.40). This results in larger transport of
heat by convection, and as a result the thermal gradients at the upper and
lower boundaries increase. The Nusselt number is a measure of the effi-
ciency of the convective heat transfer across the layer. The Nusselt number
Nu is the ratio of the heat transferred by convection to the conductive value
without convection. In terms of our nondimensional variables it is given by
The essential feature of the solution of the Lorenz equations in this param-
eter range is deterministic chaos. One consequence of the deterministic
chaos of the Lorenz equations is that solutions that begin a small distance
apart in phase space diverge exponentially, with essentially infinite sensitivity to initial conditions.
Problems
Problem 12.1. Show that the critical Rayleigh number given by (12.20) has
the minimum value as given by (12.21) and (12.22).
Problem 12.2. For the steady-state solution of the Lorenz equations given in
(12.40) and (12.41), determine an expression for the mean horizontal ve-
locities on the boundaries at y = 0, 1.
Chapter Thirteen
IS MANTLE CONVECTION CHAOTIC?
The Lorenz equations are a low-order expansion of the full equations applic-
able to thermal convection in a fluid layer heated from below. For the range
of parameters in which chaotic behavior is obtained, the low-order expan-
sion is not valid; higher-order terms should be retained. Nevertheless, the
chaotic behavior of the low-order analog is taken as a strong indication that
the full equations will also yield chaotic solutions. Numerical solutions of
the full equations are strongly time dependent for high Rayleigh number
flows; these solutions appear to be turbulent or chaotic.
It is generally accepted that thermal convection is the primary means of
heat transport in the earth's mantle. Heat is produced in the mantle due to the
decay of the radioactive isotopes of uranium, thorium, and potassium. Heat
is also lost due to the cooling of the earth. The surface plates of plate tecton-
ics are the thermal boundary layers of mantle convection cells. The plates
are created by ascending mantle flows at ocean ridges. The plates become
gravitationally unstable and founder into the mantle at ocean trenches (sub-
duction zones). Intraplate hot spots such as Hawaii are attributed to mantle
plumes that ascend from the hot unstable thermal boundary layer at the base
of the convecting mantle.
An important question with regard to the earth is whether mantle con-
vection is chaotic. The earth's solid mantle behaves as a fluid on geological
time scales because of thermally activated creep. The discussion in the pre-
vious chapter considered only a constant viscosity. This is a poor approxima-
tion for the earth's mantle because the dependence of strain rate on stress is almost certainly nonlinear and an exponential function of temperature and
pressure. Also, the Boussinesq approximation is not applicable because of
the significant increase in density with depth (i.e., pressure). Nevertheless,
calculations assuming a linear stress-strain relation and constant fluid prop-
erties can provide important insights. In the previous chapter we estimated
that the Rayleigh number for mantle convection is near 107 and the Prandtl
number is larger than 1023. The latter is such a large value that it is appropri-
ate to assume that the mantle has an infinite Prandtl number. Because the
Prandtl number for the mantle is so large, the momentum terms on the left-
hand side of the momentum equations (12.2) and (12.3) can be neglected.
Thus the only nonlinear terms are those in the energy equation (12.4). The
question is whether these terms can generate chaotic behavior and thermal
turbulence.
The first question we address is whether the Lorenz equations yield
chaotic solutions in the limit Pr → ∞. In this limit (12.27) requires
This result is then substituted into (12.15). The lowest consistent order of truncation beyond that used by Lorenz is m = 2 for the expansion in x (m = 0, 1, 2) and n = 4 for the expansion in y (n = 1, 2, 3, 4). This truncation yields a set of 12 ordinary differential equations for the time dependence of the temperature coefficients θmn that can be written
It is necessary to take the resolution in the vertical direction to be twice that in the horizontal direction in order to resolve the convection terms in the energy equation.
The time evolution of the 12 coefficients θ01, θ02, θ03, θ04, θ11, θ12, θ13, θ14, θ21, θ22, θ23, θ24 is found by integrating numerically the 12 equations given by (13.5). The time evolution can be thought of as trajectories in a 12-dimensional phase space. It is convenient to project the 12-dimensional trajectories onto a two-dimensional phase space consisting of the coefficients of the fundamental mode and the first subharmonic. There
these correspond to the fundamental mode and the first subharmonic. There
are two parameters in this problem, the Rayleigh number, Ra or r, and the
wavelength. In this discussion solutions are given only for the critical value
of the wavelength, λ = 2^{3/2}.
At subcritical Rayleigh numbers 0 < r < 1 (0 < Ra < 657.5), the only fixed point of the solution is at the origin and it is stable; there is no flow. For higher Rayleigh numbers, the two fixed points corresponding to clockwise and counterclockwise rotations in the fundamental mode become stable.
[Figure: phase-plane trajectories of the pure and mixed modes.]
Problems
Problem 13.1. Show that the temperature coefficient B in the Lorenz expan-
sion (12.24) is related to θ11 by
For Ra = 10^4 and λ = 8^{1/2} compare the value of B from the three-mode (Lorenz) expansion with the value from the 12-mode expansion.
Problem 13.2. Show that the temperature coefficient C in the Lorenz expan-
sion (12.24) is related to θ02 by
For Ra = 10^4 and λ = 8^{1/2} compare the value of C from the three-mode (Lorenz) expansion with the value from the 12-mode expansion.
Chapter Fourteen
RIKITAKE DYNAMO
For the last 0.72 Myr the earth's magnetic field has been in its present (normal) orientation; between 0.72 and
2.5 Myr ago there was a period during which the orientation of the field was
predominantly reversed. Clearly one characteristic of the core dynamo is
that it is subject to spontaneous reversals.
A question that can be asked is whether the reversals give a fractal distri-
bution of polarity intervals. This question has been addressed by Gaffin
(1989) and his results are given in Figure 14.2. The number of polarity inter-
vals N(T) of length greater than T is given as a function of T. For intervals
between 300,000 yr and 50 Myr a good correlation with the fractal relation
(2.6) is obtained taking D = 1.43.
No detailed theory exists for the behavior of the core dynamo. The vis-
cosity of the liquid outer core is sufficiently small that the flow is undoubt-
edly turbulent. Thus the patterns of flow, electrical currents, and magnetic
fields are very complex. Because of this complexity, relatively simple disk
dynamos have been proposed as analog models. Rikitake (1958) proposed
the symmetric two-disk dynamo illustrated in Figure 14.3. It is composed of
two symmetric disk dynamos in which the current produced by one dynamo
energizes the other. Equal torques G are applied to the two dynamos in order
to overcome ohmic losses. Rikitake (1958) found that this dynamo was sub-
ject to random reversals of the magnetic field, but it was much later (Cook
and Roberts, 1970) that it was demonstrated that the Rikitake dynamo be-
haved in a chaotic manner.
where R is the resistance in either circuit and M is the mutual inductance be-
tween the current loops and the electrically conducting disks.
An electrical current I in a magnetic field B results in a force F = I × B per unit length of the current path. The interaction between the magnetic field and the radially inward electric current results in a torque in the clockwise direction. In the steady state this torque balances the
applied torque G and is given by
The discussion given above is also applicable to the current loop and rotat-
ing disk on the left in Figure 14.3.
X1 = (M/G)^{1/2} I1, X2 = (M/G)^{1/2} I2, Y1 = (CM/(GL))^{1/2} Ω1, Y2 = (CM/(GL))^{1/2} Ω2, τ = (GM/(CL))^{1/2} t, μ = (C/(GLM))^{1/2} R
where C is the moment of inertia of each disk about its axis and L is the self-inductance of each circuit.
The plus and minus signs refer respectively to the normal and reversed states
of the magnetic field.
Stability calculations (Cook and Roberts, 1970) have shown that the sin-
gular points given above are unstable for all parameter values. Their numeri-
cal solutions for μ = 1 and K = 2 are given in Figures 14.4 and 14.5. The singular points X2 = ±K^{-1}, Y1 = μK^2 are shown in the X2Y1 phase plane illustrated in Figure 14.4. The strange-attractor behavior of the solution is very similar to
that of the Lorenz equations given in Figures 12.1(a), (b). The time evolution of the solutions, given in Figure 14.5, is also similar to that of the Lorenz equations given in Figure 12.1(c). Oscillations grow in one polarity of the
field until it flips into the other polarity.
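The reversal behavior can be reproduced with a short integration. The sketch below assumes the standard nondimensional form of the Rikitake equations used by Cook and Roberts (1970): dX1/dτ = −μX1 + Y1X2, dX2/dτ = −μX2 + Y2X1, dY1/dτ = dY2/dτ = 1 − X1X2, with Y2 = Y1 − μ(K^2 − K^{-2}). The initial condition and step size are illustrative.

```python
# Sketch of the Rikitake two-disk dynamo in its assumed standard
# nondimensional form (Cook and Roberts, 1970), for mu = 1, K = 2.
mu, K = 1.0, 2.0
A = mu * (K**2 - K**-2)          # = 3.75: constant offset Y1 - Y2

def rhs(s):
    X1, X2, Y1 = s
    Y2 = Y1 - A
    return (-mu * X1 + Y1 * X2, -mu * X2 + Y2 * X1, 1.0 - X1 * X2)

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(s, k1)))
    k3 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(s, k2)))
    k4 = rhs(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt * (a + 2 * b + 2 * c + d) / 6.0
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (1.0, 0.5, 1.0)              # arbitrary initial condition off the fixed point
dt, reversals, last_sign = 0.002, 0, 1
for _ in range(100000):          # integrate to tau = 200
    s = rk4(s, dt)
    sign = 1 if s[0] > 0 else -1
    if sign != last_sign:        # X1 changes sign: a polarity reversal
        reversals += 1
        last_sign = sign
print(reversals)
```

Counting sign changes of X1 gives an irregular sequence of polarity reversals, the behavior illustrated in Figure 14.5.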
Extensive studies of the behavior of the Rikitake dynamo equations have
been carried out by Ito (1980) and by Hoshi and Kono (1988). The behavior
is found to be periodic or chaotic. A map of these two behaviors in the K–μ
parameter space is given in Figure 14.6. The transition from periodic to
chaotic behavior follows the period-doubling route to chaos previously ob-
tained for the logistic map. A bifurcation diagram for K = 2 illustrating the
period doubling is given in Figure 14.7. Marzocchi et al. (1995) have studied
the reversal statistics of the Rikitake dynamo and found that they are not
similar to the reversals of the earth's magnetic field. This is not surprising
since the Rikitake dynamo is a low-order system and the earth's dynamo
must be a very high-order system.
The recursive map (14.21) also produces chaotic reversals of the earth's magnetic field. Again this can be taken as evidence that
the dynamo action in the core is chaotic. It is certainly desirable to consider
higher-order systems that better simulate the "turbulent" interactions be-
tween electrical currents and flows of the electrically conducting fluid. A
start in this direction has been given by Glatzmaier and Roberts (1995).
Figure 14.9. Illustration of 100 iterations of the third-order recursive map (14.21) with a = 2.75 and x0 = 0.2. The sequence of chaotic reversals is clearly shown.
Problems
Chapter Fifteen
RENORMALIZATION GROUP METHOD
15.1 Renormalization
In the first eight chapters of this book we considered the fractal behavior of
natural systems. This behavior was generally statistical and the physical
causes were generally inaccessible. In the six chapters that followed we con-
sidered low-order dynamical systems that exhibit chaotic behavior. Because
of the low order, the examples are generally quite far removed from natural
systems of interest. In this chapter and the next we take a collective view of
natural phenomena and consider some applications in geology and geo-
physics.
Thermodynamics represents the standard approach to collective phe-
nomena. System variables are defined, that is, temperature, pressure, den-
sity, entropy; and the evolution of these variables is determined from the first
law of thermodynamics (conservation of energy) and the second law of ther-
modynamics (variation of entropy). Statistical mechanics provides the ratio-
nal microscopic basis for much of thermodynamics.
In general, neither thermodynamics nor statistical mechanics yields frac-
tal statistics or chaotic behavior. Exceptions include critical points and phase
changes. A characteristic feature of a phase change is a discontinuous (cata-
strophic) change of macroscopic parameters of the system under a continu-
ous change in the system's state variables. For example, when water freezes
its viscosity changes from a very small value to a very large value with no
change in temperature.
The renormalization group method has been used successfully in treat-
ing a variety of phase change and critical-point problems (Wilson and
Kogut, 1974). This method often produces fractal statistics and explicitly
utilizes scale invariance. A relatively simple system is considered at the
smallest scale; the problem is then renormalized (rescaled) to utilize the
same system at the next larger scale. The process is repeated at larger and
larger scales. This is very similar to our renormalization models for fragmentation considered in Chapter 3.
Figure 15.1. (a) A 16 × 16 array of square elements. The probability p0 that an element is permeable is 0.5; either the dark or the light elements can be assumed to be permeable. For either case, no permeable path across the array is found. (b) Illustration of the renormalization group method; four square elements are considered at each of the four scales.
Thus there is a critical value of p0, p*, for the onset of flow through the grid of elements. If p0 is less than this critical value p*, a large square grid will almost certainly be impermeable to flow. If p0 is greater than this critical value p*, a large square grid will almost certainly be permeable to flow.
It may be easier to visualize this problem if one considers a forest made up of a square grid of trees. The probability that a grid point has a tree is p0.
The question is whether a forest fire can burn through the forest if a tree can
ignite only its nearest neighbors. If there are no nearest neighbors the fire
does not spread. This forest-fire problem is mathematically identical to the
percolation problem considered above.
We now turn to the percolation problem and consider in some detail the 16 × 16 array of square elements illustrated in Figure 15.1(a). The total number of elements is n = 256. Taking p0 = 0.5, it has been randomly determined whether each element is permeable or impermeable. With p0 = 0.5 either the dark squares or the light squares can be taken to be permeable. In either case no continuous permeable path is found either horizontally or vertically. Using the Monte Carlo approach a large number of random realizations would be carried out and the probability P(p0) that the array is permeable would be determined. For the two-dimensional square array with large n, numerical simulations find that the critical probability for the onset of flow in the array is p* = 0.59275 (Stauffer and Aharony, 1992).
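The Monte Carlo procedure can be sketched in a few lines (an illustrative sketch, not from the original text; it tests for a vertical permeable path by flood fill, and the grid size and number of realizations are arbitrary choices):

```python
import random

def percolates(grid):
    """Check for a vertical permeable path (top row to bottom row)
    through nearest-neighbor connections, via flood fill."""
    n = len(grid)
    stack = [(0, j) for j in range(n) if grid[0][j]]
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

def percolation_probability(n, p0, trials, seed=0):
    """Estimate P(p0), the probability that an n x n array is permeable."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p0 for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials
```

For a 16 × 16 array the estimated P(p0) rises steeply as p0 passes through the critical region near p* ≈ 0.59, from essentially zero well below it to essentially one well above it.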
It is also of interest to consider the number-size statistics for percolation clusters at the critical limit p0 = p*. The size of a percolation cluster is defined to be the number of permeable elements in contact with each other when the array first becomes permeable. The number of elements in the percolation cluster ne has been determined numerically as a function of the array size n. For two-dimensional arrays it is found that (Stauffer and Aharony, 1992)

ne ∝ n^(91/96)    (15.1)

This result can be compared with the deterministic Sierpinski carpet assuming that the remaining squares illustrated in Figure 2.3 represent a percolation cluster. For the Sierpinski carpet ne = 8 when n = 9 and ne = 64 when n = 81, thus

ne = n^(ln 8/ln 9)    (15.2)

Comparison of (15.1) and (15.2) shows that the fractal dimension for the percolation cluster at the critical limit is D = 91/48 = 1.896, which is very close to the value for a Sierpinski carpet D = ln 8/ln 3 = 1.893.
At the onset of percolation, the sites through which flow takes place are
known as the percolation backbone. Sahimi et al. (1992, 1993) explained the
probability that all four elements are permeable is p0^4 and there is only one configuration as shown in Figure 15.2(e): it is permeable (+).
Taking into account all possible configurations the probability that the first-order cell is permeable is given by

p1 = p0^4 + 4p0^3(1 - p0) + 2p0^2(1 - p0)^2    (15.3)
The first-order probability includes the two configurations with two perme-
able elements, the four configurations with three permeable elements, and
the single configuration with four permeable elements. Renormalization is
carried out and four first-order cells become second-order elements. After renormalization exactly the same statistics are applicable to the second-order cell with the result

p2 = p1^4 + 4p1^3(1 - p1) + 2p1^2(1 - p1)^2    (15.4)

This result can be applied to the nth-order cell with the result

pn+1 = pn^4 + 4pn^3(1 - pn) + 2pn^2(1 - pn)^2    (15.5)
This recursive relation for the probability is quite similar to the logistic map
considered in Chapter 10.
Figure 15.3 shows the dependence of pn+1 on pn given in (15.5). To consider the fixed points it is appropriate to rewrite (15.5) as

xn+1 = f(xn) = 2xn^2 - xn^4    (15.6)

In the range 0 < x < 1 there are three fixed points x = 0, 0.618, 1. The corresponding values of λ = df/dx are 0, 1.528, 0. The fixed points at x = 0 and 1 are stable since |λ| < 1 but the fixed point at x = 0.618 is unstable.
To illustrate further the iteration of the probabilities given by (15.5) we
consider two specific cases. For p0 = 0.4 we find p1 = 0.294, p2 = 0.166, and p3 = 0.054 as illustrated in Figure 15.3. The construction is the same as that used in Figures 10.1-10.5. As the iteration is continued to large n the probability pn approaches the stable fixed point p∞ = 0. A large two-dimensional array is impermeable for p0 = 0.4. For p0 = 0.8 we find p1 = 0.870, p2 = 0.941, and p3 = 0.987 as illustrated in Figure 15.3. As the iteration is continued to large n the probability pn approaches the stable fixed point p∞ = 1. A large two-dimensional array is permeable for p0 = 0.8.
The unstable fixed point at p* = 0.618 is a critical point. At the critical point, pn = p* for all values of n; the probability that a cell is permeable is scale invariant. For probabilities smaller than the critical value 0 < p0 < p* the iteration is to the impermeable limit p∞ = 0. For probabilities greater than the critical value p* < p0 < 1 the iteration is to the permeable limit p∞ = 1.
The value p* = 0.618 obtained by the renormalization group method com-
pares with p* = 0.59275 obtained by the direct numerical simulations for
large arrays.
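These iterations are easy to reproduce numerically. The following sketch (illustrative, not from the original text) assumes the 2 × 2 map in the polynomial form pn+1 = 2pn^2 - pn^4, the sum over the two permeable two-element configurations, the four three-element configurations, and the single four-element configuration, and locates the unstable fixed point by bisection:

```python
def renormalize(p):
    """One 2 x 2 renormalization step: 2p^2(1-p)^2 + 4p^3(1-p) + p^4,
    which simplifies to 2p^2 - p^4."""
    return 2 * p**2 - p**4

def iterate(p0, n):
    """Return the sequence p0, p1, ..., pn."""
    seq = [p0]
    for _ in range(n):
        seq.append(renormalize(seq[-1]))
    return seq

def unstable_fixed_point(lo=0.1, hi=0.9, tol=1e-10):
    """Bisect on f(p) - p, which is negative below p* and positive above."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if renormalize(mid) > mid:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Starting from p0 = 0.4 the iterates fall toward zero (0.294, 0.166, 0.054, ...) and from p0 = 0.8 they climb toward one, while the bisection returns p* = (√5 - 1)/2 ≈ 0.618, since f(p) = p reduces to p^2 + p - 1 = 0.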
The two-dimensional renormalization group method can be based on a
larger lowest-order array. Consider a 3 × 3 array with nine elements. Taking into account all possible configurations, the probability pn+1 that the (n + 1)th-order cell is permeable is related to the probability pn that the nth-order cell is permeable by

pn+1 = pn^9 + 9pn^8(1 - pn) + 36pn^7(1 - pn)^2 + 67pn^6(1 - pn)^3 + 59pn^5(1 - pn)^4 + 22pn^4(1 - pn)^5 + 3pn^3(1 - pn)^6    (15.8)

For a three-dimensional array of cubic elements with large n, numerical simulations find that the critical probability for the onset of flow in the array is p* = 0.3117 (Stauffer and Aharony, 1992). Again, a fractal relation of the form (15.2) is obtained be-
tween the number of permeable elements ne in the critical percolation cluster
and the total number of elements n with D = 2.5. This result can be compared
with the deterministic Menger sponge illustrated in Figure 2.4(a), assuming
that the remaining cubes represent a percolation cluster. For the Menger
sponge ne = 20 when n = 27 and ne = 400 when n = 729; the value D = 2.727
for the Menger sponge is somewhat higher than the value for three-dimen-
sional percolation clusters.
The simplest renormalization group model for the array of cubic elements is a 2 × 2 × 2 cubic array of eight elements. Taking into account all possible configurations, the probability that the (n + 1)th-order cell is permeable is related to the probability that the nth-order cell is permeable by

pn+1 = pn^8 + 8pn^7(1 - pn) + 28pn^6(1 - pn)^2 + 56pn^5(1 - pn)^3 + 54pn^4(1 - pn)^4 + 24pn^3(1 - pn)^5 + 4pn^2(1 - pn)^6    (15.9)
The critical value for the onset of permeability is p* = 0.282. This is in rea-
sonably good agreement with the numerical results considering the simplic-
ity of the model.
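The same bisection idea applies to the 2 × 2 × 2 cell. The sketch below takes the configuration counts from the tabulation given in Problem 15.4 (4 permeable configurations with two permeable elements, 24 with three, 54 with four, 56 with five, 28 with six, 8 with seven, and 1 with eight):

```python
# (permeable elements k, permeable configurations m) for the 2 x 2 x 2 cell
CONFIGS = [(2, 4), (3, 24), (4, 54), (5, 56), (6, 28), (7, 8), (8, 1)]

def renormalize_3d(p):
    """Probability that a 2 x 2 x 2 cell is permeable, given element probability p."""
    return sum(m * p**k * (1 - p)**(8 - k) for k, m in CONFIGS)

def critical_probability(lo=0.05, hi=0.95, tol=1e-10):
    """Bisect for the unstable fixed point of the map."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if renormalize_3d(mid) > mid:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The bisection gives p* ≈ 0.282, the value quoted above for the onset of permeability.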
For fluid flow through rocks the two measurable quantities are the
porosity (the degree to which void space becomes filled with fluid) and the
permeability (the ability of the fluid to flow through the rock under fluid
pressure). The highly idealized model considered above predicts that there
will be a sudden onset of permeability at a critical value of the porosity. Al-
though rocks with low porosity have essentially zero permeability, the sud-
den onset of permeability at a universal critical value of the porosity is not
observed. This is attributed to the variety of aperture sizes and lengths occur-
ring in natural systems, which is not fully described by the idealized model.
Problems associated with electrical conduction through a matrix of ele-
ments are essentially identical to the percolation problems considered
above. Madden (1983) has applied the renormalization group method to the
onset of electrical conductance through a grid of electrical conductors and
insulators.
the probability p1 that a first-order cell is fragile is related to the probability p0 that a first-order element is fragile by
After renormalization exactly the same statistics are applicable at higher or-
ders. Thus we can write
If the characteristic size of the first-order cell is 2h, then the characteristic size of the nth-order cell is 2^n h.
Figure 15.6 shows the dependence of pn+1 on pn given in (15.11). To consider the fixed points it is appropriate to rewrite (15.11) as

In the range 0 < x < 1 there are three fixed points x = 0, 0.896, 1. The corresponding values of λ = df/dx are 0, 1.766, 0. The fixed points at x = 0 and 1 are stable since |λ| < 1 but the fixed point at x = 0.896 is unstable.
In writing (15.15) to (15.17) the transfer of the force (stress) when an ele-
ment fails has not been considered.
If one element fails and the other is unbroken it is necessary to determine
whether the second element will fail when the stress from the first element is
transferred to it. We introduce the conditional probability pc that an unbro-
ken element already supporting a force F will fail when an additional force F
is transferred to it. This mechanism for stress transfer leads to induced fail-
ures. The probabilities that the [ub] state will be broken or unbroken under
stress transfer are given by
From (15.14) and (15.17) the probability that a zero-order cell fails, p1, is given by

p1 = p0^2 + 2p0(1 - p0)pc    (15.20)

The conditional probability pc is given by

pc = [p0(2F) - p0(F)] / [1 - p0(F)]    (15.21)

where p0(2F) is the probability of failure under a force 2F. For the quadratic Weibull distribution given in (15.14) we have

p0(2F) = 1 - exp[-(2F/F0)^2]    (15.22)
The substitution of (15.14) into (15.22) gives

p0(2F) = 1 - (1 - p0)^4    (15.23)

Combining (15.21) and (15.23) the conditional probability for the quadratic Weibull distribution is given by

pc = 1 - (1 - p0)^3    (15.24)

Substitution of (15.24) into (15.20) gives the probability that a cell fails, p1, in terms of the probability that the element fails, p0:

p1 = p0^2 + 2p0(1 - p0)[1 - (1 - p0)^3]    (15.25)

After renormalization exactly the same statistics are applicable to the second-order cell, with the result

p2 = p1^2 + 2p1(1 - p1)[1 - (1 - p1)^3]    (15.26)

This result can be applied to the nth-order cell, with the result

pn+1 = pn^2 + 2pn(1 - pn)[1 - (1 - pn)^3]    (15.27)
Again this recursive relation resembles the logistic map considered in Chap-
ter 10.
Figure 15.8 shows the dependence of pn+1 on pn given in (15.27). To consider the fixed points it is appropriate to rewrite (15.27) as

xn+1 = f(xn) = xn^2 + 2xn(1 - xn)[1 - (1 - xn)^3]    (15.28)
In the range 0 < x < 1 there are three fixed points x = 0, 0.2063, 1. The corresponding values of λ = df/dx are 0, 1.619, 0. The fixed points at x = 0 and 1 are stable since |λ| < 1 but the fixed point at x = 0.2063 is unstable.
To illustrate further the iteration of the probabilities given by (15.27), we consider two specific cases. For p0 = 0.1 we find p1 = 0.05878, p2 = 0.02184, and p3 = 0.00322 as illustrated in Figure 15.8. The construction is the same as used in Figures 10.1-10.5. As the iteration is continued to large n, the probability pn approaches the stable fixed point p∞ = 0. The large fractal tree does not fail for p0 = 0.1. For p0 = 0.6 we find p1 = 0.8093, p2 = 0.9615, and p3 = 0.9985 as illustrated in Figure 15.8. As the iteration is continued to large n, the probability pn approaches the stable fixed point p∞ = 1. The large fractal tree fails for p0 = 0.6.
The unstable fixed point at p* = 0.2063 is a critical point. For probabilities smaller than the critical value 0 < p0 < p* the iteration is to the unbroken limit p∞ = 0. For probabilities greater than the critical value p* < p0 < 1 the iteration is to the broken limit p∞ = 1. The value p* = 0.2063 corresponds to the catastrophic failure of the fractal tree.
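The critical point of this two-asperity map can be verified directly. The sketch below assumes the map in the form pn+1 = pn^2 + 2pn(1 - pn)[1 - (1 - pn)^3], that is, a cell fails if both elements fail, or if one fails and the stress-doubled survivor then fails with the quadratic-Weibull conditional probability 1 - (1 - p)^3; this form reproduces the iterates and the value p* = 0.2063 quoted in the text:

```python
def cell_failure(p):
    """Probability that a two-element cell fails: both elements fail, or one
    fails and the stress-doubled survivor fails with conditional
    probability 1 - (1 - p)**3 (quadratic Weibull assumption)."""
    return p**2 + 2 * p * (1 - p) * (1 - (1 - p)**3)

# Setting cell_failure(p) = p and dividing out the trivial roots reduces
# the fixed-point condition to (1 - p)**3 = 1/2, so:
p_star = 1 - 2 ** (-1 / 3)
```

Iterating from p0 = 0.1 reproduces the sequence 0.05878, 0.02184, 0.00322 given in the text, and p_star evaluates to 0.2063.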
Because of the transfer of stress, all elements fail when the probability of failure of an individual element is only 0.2063. From (15.14) this corresponds to F/F0 = 0.48.

Figure 15.8. Dependence of the probability pn+1 of failure at order n + 1 on the probability pn of failure at order n from (15.27) for cells containing two asperities with a quadratic Weibull distribution of strengths. The procedure described in the text for determining the probability of cell failure for successive iterations is illustrated for p0 = 0.6, 0.1. The critical probability of failure p* gives the bifurcation point for catastrophic failure of the system. If 0 < p0 < p*, the solution iterates to p∞ = 0 and no failure occurs. If p* < p0 < 1, the solution iterates to p∞ = 1, and the system has failed.
where p(tf) is the probability that failure will occur in a time less than tf and ν(σ) is known as the "hazard rate" under stress σ. One-half of a large collection of wires under stress σ will have failed when t1/2 = (ln 2)/ν. It is often appropriate to assume that the dependence of the hazard rate on stress is given by

ν = ν0(σ/σ0)^ρ    (15.31)

where ν0 is the hazard rate under stress σ0, and ρ is typically in the range 2-5. Combining (15.30) and (15.31) gives
The rate at which elements fail is assumed to be given by the rate law
The cumulative Benioff strain in northern California prior to the October 17,
1989, Loma Prieta earthquake is given in Figure 15.9(a) (Bufe and Varnes,
1993). The increase in the Benioff strain illustrated in this figure fits the ex-
ponential scaling given in (15.36) quite well, but there also appears to be a
periodic component. Sornette and Sammis (1995) have also considered these
data and concluded that there is an excellent fit to a log-periodic increase in
seismic activity.
A simple power-law (fractal) increase in the cumulative Benioff strain B is given by

where tf - t is the time prior to the earthquake and the constant a is negative. To obtain log-periodic behavior, we assume that the exponent a is complex: a = ξ + iη. In this case we obtain

Re[(1 - t/tf)^(ξ+iη)] = (1 - t/tf)^ξ cos[η ln(1 - t/tf)]    (15.40)

where Re stands for the real part. This is log-periodic behavior. Combining
(15.39) and (15.40) a generalized self-similar expression for the cumulative
Benioff strain takes the form
where C specifies the amplitude and δ the phase of the log-periodic component. This result is fully self-similar.
This result is also valid for the more general form of log-periodic behavior given in (15.41). Because of the basic scale invariance, the value of tf obtained from (15.43) is independent of the origin of the time scale used.
Sornette and Sammis (1995) used the generalized log-periodic relation
(15.41) to fit the strain accumulation data of Bufe and Varnes (1993); the re-
sult is given in Figure 15.9(a). The data appear to exhibit a series of log-peri-
odic fluctuations. An important question is whether this type of strain accu-
mulation data can be used to predict earthquakes. Sornette and Sammis
(1995) used the log-periodic fit to the data available prior to a cutoff date in
Figure 15.9(a) to predict when an earthquake would be expected. Their re-
sults are given in Figure 15.9(b). The predictions became increasingly accu-
rate as the cutoff times approached the date of the Loma Prieta earthquake,
October 17, 1989. Although the ability to make this retrospective prediction
is encouraging, it remains to be demonstrated that this technique can be used
successfully to predict earthquakes.
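The prediction step can be sketched under the standard assumption behind (15.43): the intervals tf - tn of a log-periodic sequence decrease geometrically, so three successive characteristic times determine tf = (t2^2 - t1·t3)/(2t2 - t1 - t3). The function names below are illustrative, not from the original text:

```python
def failure_time(t1, t2, t3):
    """Estimate tf from three successive log-periodic event times,
    assuming tf - tn decreases geometrically with n."""
    return (t2**2 - t1 * t3) / (2 * t2 - t1 - t3)

def next_event(t1, t2, t3):
    """Fourth event time, from the same geometric progression:
    tf - t4 = (tf - t3)**2 / (tf - t2)."""
    tf = failure_time(t1, t2, t3)
    return tf - (tf - t3)**2 / (tf - t2)
```

For example, events at t1 = 60, t2 = 80, t3 = 90 (intervals to tf shrinking by a factor of two) give tf = 100 and t4 = 95; shifting all three times by the same t0 shifts tf by t0, illustrating the invariance to the origin of time.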
As a further illustration of log-periodic behavior, we now consider a hi-
erarchical model for failure similar to that discussed above but including the
time-to-failure approach (Newman et al., 1995). An array of stress-carrying
elements is considered analogous to the strands of an ideal, frictionless ca-
ble. Each element has a time-to-failure that is dependent on the stress the
element carries and has a statistical distribution of values. When an element
fails, the stress on the element is transferred to a neighboring element; if two
adjacent elements fail, stress is transferred to two neighboring elements; if
four adjacent elements fail, stress is transferred to four neighboring ele-
ments; and so forth. The hierarchical model for failure is illustrated in Figure
15.10. At the lowest order in this example there are 128 zero-order elements.
These elements are paired to give 64 first-order elements, the 64 first-order
elements are paired to give 32 second-order elements, and so forth. A statis-
tical distribution of lifetimes is assigned to the lowest-order elements. When
one of these elements fails, the stress on the element is transferred to the
neighboring element, increasing the stress on it. If a pair of zero-order ele-
ments fail, that is, a first-order element, the stress is transferred to the adja-
cent pair of zero-order elements, that is, to the adjacent first-order element,
and so forth.
To illustrate the stress transfer consider the second-order (n = 4) example given in Figure 15.11. Each element is given a probabilistic "lifetime" and two examples of failure are illustrated. At time t = 0 the stress σ0 is ap-
plied to the four elements. In both examples element "2" has the shortest lifetime and it is the first to fail. The stress σ0 on element "2" is transferred to element "1" placing a stress 2σ0 on this element as illustrated in Figure 15.11 (ii). The question now is whether the enhanced stress on element "1" will cause it to fail prior to elements "3" or "4." In example (a) element "1" is the next to fail and the stress 2σ0 on this element is transferred to elements "3" and "4" placing a stress 2σ0 on both of these elements. Element "4" is the next to fail and the stress 2σ0 on it is transferred to the last surviving element "3," which has a stress 4σ0. In example (b) element "4" is the second element to fail and the stress σ0 on this element is transferred to element "3" placing stress 2σ0 on this element. Element "3" is the next to fail and the stress 2σ0 is again transferred to the last surviving element "1," which has a stress 4σ0.
The zone of stress transfer is equal in size to the zone of failure. This lo-
cal load-sharing model simulates the elastic redistribution of stress adjacent
to a rupture. If elements are subjected to a constant stress σ at t = 0, (15.32) gives the statistical distribution of failure times tf. However, with stress transfer the stress is not necessarily constant. To accommodate the increase in stress caused by local load sharing from failed elements, we introduce a reduced time to failure for each element, Ti, given by

Each element i is assigned a random time to failure ti under stress σ0 based on (15.30). The actual time to failure of element i, namely Ti, is reduced below ti if stress is transferred to the element. The time Ti is obtained by requiring that (15.44) be satisfied.
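The effect of stress transfer on failure times can be illustrated with a small sketch. It assumes the hazard-rate scaling ν(cσ) = c^ρ·ν(σ) of (15.31), so that the unconsumed lifetime of an element is used up c^ρ times faster once its stress is multiplied by c; this is one consistent reading of the reduced-time condition, not necessarily the book's exact expression (15.44). The element numbers and lifetimes below are made up for illustration:

```python
def reduced_failure_time(t_transfer, t_nominal, stress_factor, rho):
    """Failure time of an element whose stress is multiplied by
    stress_factor at time t_transfer: the remaining life
    (t_nominal - t_transfer) is consumed stress_factor**rho times
    faster after the transfer (hazard-rate scaling assumption)."""
    remaining = t_nominal - t_transfer
    return t_transfer + remaining / stress_factor**rho

# Hypothetical four-element example in the spirit of Figure 15.11(a),
# with rho = 2 and made-up nominal failure times under stress sigma0:
rho = 2.0
t_nominal = {1: 0.50, 2: 0.10, 3: 0.80, 4: 0.60}
# Element 2 fails first at t = 0.10; its stress doubles that on element 1:
T1 = reduced_failure_time(0.10, t_nominal[1], 2.0, rho)
```

Here element 1's failure time drops from 0.50 to 0.20, so it fails before elements 3 and 4, producing the kind of induced-failure cascade shown in Figure 15.11(a).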
Consider the example illustrated in Figure 15.11(a). The four elements i = 1, 2, 3, 4, carrying stress σ0, are assigned failure times t1, t2, t3, and t4 using the probability distribution (15.30). Element "2" has the shortest failure time so that

Upon failure of "1," the stress 2σ0 is transferred to elements "3" and "4," as illustrated in Figure 15.11(a) (iii). Element "4" is the next element to fail and its failure time T4 is given by
Upon the failure of "4," the stress 2σ0 is transferred to element "3," as illustrated in Figure 15.11(a) (iv). The time to failure of element "3" is given by
Alternative failure sequences are also possible, one example of which is illustrated in Figure 15.11(b). Again "2" is the first element to fail; however, in this case the second element to fail is "4," then "3" fails, and finally "1" fails.
Results of a numerical calculation using a 16th-order (n = 65,536) realization of this model are given in Figure 15.12. The total failure sequence is given in Figure 15.12(a). The nondimensional time is taken to be τ = ν0t and failure in this case occurs at τ = 0.048027. It is interesting that failure occurs at a nondimensional time that is more than an order of magnitude shorter than the mean time to failure of an individual element, τ = 0.61315. The lifetime of the composite material is much shorter than the mean lifetime of individual elements. This is in agreement with the results obtained above using the renormalization group approach.
The failure sequence between τ = 0.0445 and total failure is expanded in Figure 15.12(b). There is a well-defined sequence of partial failures prior to the total failure at τf = 0.048027. Well-defined partial failures occur at τ1 = 0.047965, τ2 = 0.047799, τ3 = 0.047487, τ4 = 0.047162, and τ5 = 0.046124. The failure sequence between τ = 0.04745 and τ = 0.04785 is further expanded in Figure 15.12(c) to show the structure of the partial failures at τ = 0.047799 and τ = 0.047487. In each case there is a nested sequence of higher-order partial failures. Further expansion would show higher orders of nesting. The structure is basically self-similar or fractal. There is a scale-invariant sequence of precursory failures at all levels. Because of the stochastic nature of the model, the embedding is not always clear.
It is important to note, however, that the quality of the fit deteriorates as complete
failure is approached. The global analysis employed in the derivation of
(15.36) deteriorates owing to the increasing importance of localization in the
evolution of the cascade of failures.
The sequence of failures as a function of position on the linear array of
elements is shown in Figure 15.14 for the above realization. The precursory
cascades of failure are clearly illustrated. This figure illustrates the growing
importance of localization in failure events as criticality is approached.
Problems
Problem 15.1. A unit square is divided into 16 smaller squares of equal size
and the four central squares are removed; the construction is repeated.
Assume that the remaining squares represent a percolation cluster and
determine ne for n = 16 and 256.
Problem 15.2. Determine the equivalent expression to (15.2) for a cubic ar-
ray; use the Menger sponge as an example.
Problem 15.3. Consider the 3 X 3 renormalization group approach to the
two-dimensional array of square elements. For this array the tabulation
(permeable elements, impermeable elements, alternative configurations,
and permeable configurations) is: (0, 9, 1, 0), (1, 8, 9, 0), (2, 7, 36, 0), (3, 6, 84, 3), (4, 5, 126, 22), (5, 4, 126, 59), (6, 3, 84, 67), (7, 2, 36, 36), (8, 1, 9, 9), (9, 0, 1, 1). Derive equation (15.8).
Problem 15.4. Consider the 2 X 2 X 2 renormalization group approach to
the three-dimensional array of cubic elements. For this array the tabula-
tion (permeable elements, impermeable elements, alternative configura-
tions, and permeable configurations) is: (0, 8, 1, 0), (1, 7, 8, 0), (2, 6, 28, 4), (3, 5, 56, 24), (4, 4, 70, 54), (5, 3, 56, 56), (6, 2, 28, 28), (7, 1, 8, 8), (8, 0, 1, 1). Derive equation (15.9).
Problem 15.5. Assuming the dark elements in Figure 15.1 are permeable,
what is the size of the largest percolation cluster?
Problem 15.6. Assuming the light elements in Figure 15.1 are permeable,
what is the size of the largest percolation cluster?
Problem 15.7. Derive the conditional probability given by (15.21).
Problem 15.8. Derive (15.23) from (15.14) and (15.22).
Problem 15.9. Consider the third-power Weibull distribution given by

p0(F) = 1 - exp[-(F/F0)^3]

Find the value of the unstable fixed point p* and the corresponding value of F/F0.
Problem 15.10. Consider the fourth-power Weibull distribution given by

p0(F) = 1 - exp[-(F/F0)^4]

Find the value of the unstable fixed point p* and the corresponding value of F/F0.
Problem 15.11. Consider the rate law for failure given in (15.34). Assume ν = ν0, a constant. Show that the time when ns = N0/2 is t1/2 = (ln 2)/ν0.
Problem 15.12. Consider the failure relation given in (15.36). Show that the time to failure is given by tf = (ρν0)^(-1).
Problem 15.13. Derive (15.43) from (15.42).
Problem 15.14. Show that (15.43) is invariant to a change in the origin of time. Substitute t1 = t1' + t0, t2 = t2' + t0, etc. and show that the primed times also satisfy (15.43).
Problem 15.15. If the first three maxima in a log-periodic sequence are t1, t2, and t3, (a) show that the fourth maximum in the sequence, t4, is given by

(b) Show that this result is invariant to a change in the origin of time; substitute t1 = t1' + t0, t2 = t2' + t0, etc. and obtain the same result.
Problem 15.16. Assume that a series of events satisfy log-periodic behavior leading to a catastrophic event. The first three events occur at t1 = 0, t2 = 15 days, and t3 = 25 days. Determine t4 and t5 (use the result obtained in Problem 15.15).
Problem 15.17. Assume that three earthquakes that occurred in 1956.2,
1980.7, and 1994.5 are precursors to a great earthquake. Also assume
that log-periodic behavior is applicable. Determine when the fourth
earthquake in the precursory sequence will occur and when the great
earthquake will occur.
Chapter Sixteen
SELF-ORGANIZED
CRITICALITY
In the last chapter we considered the renormalization group method for treat-
ing large interactive systems. By assuming scale invariance a relatively
small system could be scaled upward to a large interactive system. The ap-
proach is often applicable to systems that have critical point phenomena. In
this chapter we consider the alternative approach to large interactive sys-
tems. This approach is called self-organized criticality. A system is said to be in a state of self-organized criticality if it is maintained near a critical point (Bak et al., 1988). According to this concept a natural system is in a margin-
ally stable state; when perturbed from this state it will evolve naturally back
to the state of marginal stability. In the critical state there is no longer a nat-
ural length scale so that fractal statistics are applicable.
The simplest physical model for self-organized criticality is a sand pile.
Consider a pile of sand on a circular table. Grains of sand are randomly
dropped on the pile until the slope of the pile reaches the critical angle of re-
pose. This is the maximum slope that a granular material can maintain with-
out additional grains sliding down the slope. One hypothesis for the behavior
of the sand pile would be that individual grains could be added until the
slope is everywhere at an angle of repose. Additional grains would then sim-
ply slide down the slope. This is not what happens. The sand pile never
reaches the hypothetical critical state. As the critical state is approached ad-
ditional sand grains trigger landslides of various sizes. The frequency-size
distribution of landslides is fractal. The sand pile is said to be in a state of
self-organized criticality. On average the number of sand grains added bal-
ances the number that slide down the slope and off the table. But the actual
number of grains on the table fluctuates continuously.
The principles of self-organized criticality are illustrated using a simple
cellular-automata model. As in the previous chapter we again consider a
square grid of n boxes. Particles are added to and lost from the grid using the
following procedure.
(1) A particle is randomly added to one of the boxes. Each box on the
grid is assigned a number and a random-number generator is used to
determine the box to which a particle is added. This is a statistical
model.
(2) When a box has four particles it is unstable and the four particles are
redistributed to the four adjacent boxes. If there is no adjacent box
the particle is lost from the grid. Redistributions from edge boxes re-
sult in the loss of one particle from the grid. Redistributions from the
corner boxes result in the loss of two particles from the grid.
(3) If after a redistribution of particles from a box any of the adjacent
boxes has four or more particles, it is unstable and one or more fur-
ther redistributions must be carried out. Multiple events are common
occurrences for large grids.
(4) The system is in a state of marginal stability. On average, added par-
ticles must be lost from the sides of the grid.
This is a nearest neighbor model. At any one step a box interacts only
with its four immediate neighbors. However, in a multiple event interactions
can spread over a large fraction of the grid.
The behavior of the system is characterized by the statistical fre-
quency-size distribution of events. The size of a multiple event can be quan-
tified in several ways. One measure is the number of boxes that become un-
stable in a multiple event. Another measure is the number of particles lost
from the grid during a multiple event.
When particles are first added to the grid there are no redistributions and
no particles are lost from the grid. Eventually the system reaches a quasi-
equilibrium state. On average the number of particles lost from the edges of
the grid is equal to the number of particles added. Initially, small redistribu-
tion events dominate, but in the quasi-equilibrium state the frequency-size
distribution is fractal. This is the state of self-organized criticality. There is a
strong resemblance to the renormalization group approach considered in the
last chapter. In the renormalization group approach the frequency-size sta-
tistics are fractal only at the critical point. In the cellular automata model the
frequency-size statistics are fractal only in the state of self-organized criti-
cality.
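The model just described is easy to simulate. The following sketch (illustrative; the grid size, particle count, and seed are arbitrary) implements the four rules and records the size of each multiple event as the number of unstable-box redistributions it contains:

```python
import random

def sandpile(n, additions, seed=0):
    """Cellular-automaton model on an n x n grid.  Returns the final grid
    and the size of each multiple event (number of redistributions)."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    event_sizes = []
    for _ in range(additions):
        # Rule 1: add a particle to a randomly selected box.
        i, j = rng.randrange(n), rng.randrange(n)
        grid[i][j] += 1
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        size = 0
        # Rules 2 and 3: redistribute until every box is stable again.
        while unstable:
            bi, bj = unstable.pop()
            if grid[bi][bj] < 4:
                continue  # already relaxed by an earlier redistribution
            grid[bi][bj] -= 4
            size += 1
            if grid[bi][bj] >= 4:
                unstable.append((bi, bj))
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = bi + di, bj + dj
                # Particles redistributed off the edge are lost from the grid.
                if 0 <= ni < n and 0 <= nj < n:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        event_sizes.append(size)
    return grid, event_sizes
```

After the model reaches quasi-equilibrium, the recorded event sizes exhibit the fractal frequency-size statistics described in the text.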
The behavior of a sand pile and the behavior of the cellular automata
model have remarkable similarities to the seismicity associated with an ac-
tive tectonic zone. The addition of particles to the grid is analogous to the
addition of stress caused by the relative displacement between two surface
plates, say, across the San Andreas fault. The multiple events in which parti-
cles are transferred and are lost from the grid are analogous to earthquakes in
which some accumulated stress is transferred and some is lost. There is a
strong similarity between the frequency-magnitude statistics of multiple
events and the Gutenberg-Richter statistics for earthquakes. Before consid-
ering the analogy further, we will describe the behavior of the cellular au-
tomata model in some detail.
As a specific example we consider the 3 x 3 grid illustrated in Figure
16.1. The nine boxes are numbered sequentially from left to right and top to bottom as illustrated in Figure 16.1(a). The cellular automata model has
been run for some time to establish a state of self-organized criticality. The
further evolution of the model is as follows and is illustrated in Figure
16.1(b).
Step 1 A particle has been randomly added to box 8. The number of particles in this box has been increased from two to three.
Step 2 A particle has been randomly added to box 6, increasing the number of particles from one to two. This addition is illustrated in the change between steps 1 and 2 in Figure 16.1(b).
Step 3a A particle has been randomly added to box 5, increasing the number
of particles from three to four and making it unstable; the four parti-
cles are redistributed to the four adjacent boxes, increasing the num-
ber of particles in box 2 from three to four, the number of particles in
box 4 from three to four, the number of particles in box 6 from two
to three, and the number of particles in box 8 from three to four.
Boxes 2, 4, and 8 are now unstable. No particles are lost from the
Step 3j The four particles in box 1 are redistributed and two are lost from the
grid. No boxes remain unstable so that the sequence of 10 redistribu-
tions has completed step 3. During step 3 all nine boxes were unsta-
ble and 12 particles were lost from the grid.
Step 4 A particle has been randomly added to box 5, increasing the number
of particles from zero to one.
Step 5 A particle has been randomly added to box 6, increasing the number
of particles from two to three.
[Figure caption fragment: least-squares fit y = 2.99 - 1.03x; correlations with the fractal relation (2.6) give D = 1.39 in (a) and D = 1.12 in (b); the horizontal axis is log layer thickness h (m).]
foreshock) may trigger an instability in a large box (the main shock). A re-
distribution from a large box always triggers many instabilities in the
smaller boxes (aftershocks).
As a specific example we again consider the surface exposure of the frac-
tal fragmentation model given in Figure 3.3. A fifth-order realization of this
construction is given in Figure 16.5. We have N1 = 1 box with r1 = 1/2,
N2 = 3 boxes with r2 = 1/4, N3 = 9 boxes with r3 = 1/8, N4 = 27 boxes with
r4 = 1/16, and N5 = 108 boxes with r5 = 1/32. Except for N5 the Ni are
related to the ri by the fractal relation (2.1) with D = ln 3/ln 2 = 1.5850.
(1) Particles are added one at a time to randomly selected boxes. The
probability that a particle is added to a box is proportional to the area
Ai = ri^2 of the box.
(2) A box becomes unstable when it contains 4Ai particles.
(3) Particles are redistributed to immediately adjacent boxes or are lost
from the grid. The number of particles redistributed to an adjacent
box is proportional to the linear dimension ri of that box.
(4) If, after a redistribution of particles from a box, any of the adjacent
boxes are unstable, one or more further redistributions are carried
out. In any redistribution, the critical number of particles is redis-
tributed. Redistributions are continued until all boxes are stable.
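Rule (1), the area-weighted addition of particles, can be sketched as follows. A flat list of box sizes stands in for the actual geometry of the construction; the counts and linear dimensions are the Ni and ri quoted above, and the number of trial particles is arbitrary.

```python
import random

# Box counts and linear dimensions from the fifth-order construction:
# 1 box of size 1/2, 3 of 1/4, 9 of 1/8, 27 of 1/16, and 108 of 1/32.
sizes = [1/2] * 1 + [1/4] * 3 + [1/8] * 9 + [1/16] * 27 + [1/32] * 108
areas = [r * r for r in sizes]  # A_i = r_i^2

# Rule (1): each particle lands in box i with probability proportional to A_i.
random.seed(1)
counts = [0] * len(sizes)
for _ in range(10_000):
    i = random.choices(range(len(sizes)), weights=areas)[0]
    counts[i] += 1

# The single largest box receives far more particles than any one of the
# 108 smallest boxes, since its area is 256 times larger.
assert counts[0] > max(counts[-108:])
```

The same weighted choice would drive a full simulation in which rules (2)-(4) govern instability and redistribution.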
(1994), Rubio and Galeano (1994), Robinson (1994), Espanol (1994), and
Lin and Taylor (1994). McCloskey (1993) and McCloskey and Bean (1994)
considered arrays of slider blocks connected to two driver plates, and these
driver plates were treated as a pair of interacting slider blocks.
The standard two-dimensional array of slider blocks is illustrated in Fig-
ure 16.7. In the cellular-automata approximation it is assumed that during
the sliding of one block, all other blocks are stationary; this requirement lim-
its the system to nearest neighbor interactions, which is characteristic of cel-
lular-automata systems. To minimize the complexity we considered a dis-
continuous static-dynamic friction law. After non-dimensionalization of the
governing equations, the governing parameters are α = kc/kl (kc is the spring
constant of the connector springs, kl is the spring constant of the puller
springs), α is a measure of the stiffness of the system, φ = Fs/Fd (the ratio of
the static friction Fs to the dynamic friction Fd), and N the number of blocks
considered. In this model the parameter φ can be eliminated by rescaling.
Thus for large systems (N very large) the only scaling parameter is the stiff-
ness α. Frequency-size statistics for a 50 × 50 (N = 2500) array are given in
Figure 16.8 for several values of the stiffness parameter α. A good correla-
tion is obtained with the fractal relation (2.6) with D = 2.72. The fre-
quency-size relation shows a roll-off from the power law near the larger end
of the scaling region. This deviation is reduced as the parameter α increases.
Frequency-size statistics for several different size arrays are given in Figure
16.9. When the parameter α/N^(1/2) is greater than one, we observe an excess
number of catastrophic events that include the failure of all blocks. The fail-
ure statistics of these multiple-block systems clearly indicate a self-orga-
nized critical behavior and are remarkably similar to distributed seismicity.
[Figure 16.9 caption fragment: values 2.60, 2.95, and 3.20 correspond to
catastrophic events involving the entire system; horizontal axis: log(Nf).]
the Loma Prieta earthquake on October 17, 1989. These are illustrated in
Figure 16.10. The TIP issued for region 3 in the Caucasus during January
1987 was still in effect when the Armenian earthquake occurred in this re-
gion on December 7, 1988. TIPs were issued for region 5 in California dur-
ing October 1984 and for region 6 during January 1985. These warnings
were still in effect when the Loma Prieta earthquake occurred within these
overlapping regions on October 17, 1989.
The fault rupture of the Loma Prieta earthquake extended over about 40
km. However, the prediction algorithms detected anomalous seismic behav-
ior over two regions with diameters of 500 km. Self-organized criticality can
explain anomalous correlated behavior over large distances.
This approach is certainly not without its critics. Independent studies
have established the validity of the TIP for the Loma Prieta earthquake;
however, the occurrence of recognizable precursory patterns prior to the
Landers earthquake is questionable. Also, the statistical significance of the
size and time intervals of warnings in active seismic areas has been ques-
tioned. Nevertheless, seismic activation prior to a major earthquake cer-
tainly appears to be one of the most promising approaches to earthquake pre-
diction.
[Figure 16.10 caption: the California-Nevada region is broken up into eight
areas with diameters of 500 km. Four warnings (for regions 4-6, 8) and the
locations and times of four earthquakes are given.]
A block slides if |Fi,j| > Fs, where Fs is the prescribed static friction force. To
simplify the analysis and simulations, only one block in the array is updated
during each microscopic time step. Before the update there are two possible
states:
(1) The block was stuck after the previous update, |Fi,j|n < Fs. How-
ever, the forces on the block have changed because of subsequent
updates on neighboring blocks; there are now two possibilities:
(a) The block is still stable, |Fi,j|n+1 < Fs, and the update is termi-
nated.
(b) The block is now unstable, |Fi,j|n+1 > Fs. In this case motion of
the block is initiated.
(2) The block was slipping after the previous update, |Fi,j|n > Fs. But
again the forces on the block have changed because of subsequent
updates on neighboring blocks. There are two possibilities:
(a) The block is now stable, |Fi,j|n+1 < Fs, and the step is termi-
nated.
(b) The block is still unstable, |Fi,j|n+1 > Fs, and then (16.3) is used
to determine the new position of the block and the new net force
on the block (Fi,j)n+1 is determined. Again there are two possi-
bilities:
(i) If |Fi,j|n+1 < Fs the block remains stuck until the next up-
date.
(ii) If |Fi,j|n+1 > Fs, the block slips until the next update.
If |Fi,j| > 1, block (i, j) is unstable and its nondimensional slip is given by
However, the slip condition for a block is determined by the statistical distri-
bution of forces on the blocks. From (16.4) it is seen that the random force
on a block is the sum of four random forces on the neighbor springs. These
forces are not independent, as one can see from (16.4), and the Gaussian dis-
tribution of forces on the block is
A block can slip if |FB| > 1. Using (16.10), the probability Ps that a block
will be slipping is
Figure 16.13. Number of clusters ns of size s as a function of s. The solid
line is the distribution of percolating clusters for a 2000 × 2000 array with
the critical percolation probability p = 0.5927. The dashed line is the
distribution of slipping clusters of blocks on our 2000 × 2000 array of slider
blocks at the critical point ε = 0.213.
tude scale, but also, the level of activity does not vary from year to year (see
Figure 4.3). Earthquakes in this magnitude range strongly resemble the sta-
tistical fluctuations of the slider-block array near its critical point.
Further evidence supporting the applicability of the "percolation"-like
model comes from the spatial distribution of seismicity in southern Califor-
nia. The distributions given in Figure 4.12 appear to correspond to the fractal
dimension of the percolation "backbone" of a critical three-dimensional per-
colation cluster. It appears that the earthquakes on a complex array of faults
form a connected path across the zone of crustal deformation in direct anal-
ogy to the "percolation backbone" of slipping blocks in the array. Rundle et
al. (1995) found that the block energy distribution for a driven slider-block
model is a Maxwell-Boltzmann distribution as the model approaches the
mean field where fluctuations are minimal.
56, 36, 05, 15, 25, 68, 40, 52, 18, 81, 03, 79, 35, 35, 95, 56, 59, 80, 51, 07,
20, 56, 86, 46, and 30. Trees were already planted on 36, 05, 79, 35, 56, and
56. The matches dropped on 25, 81, 95, and 07 did not ignite because there
were no trees on these grid points. The match dropped on 30 ignited this tree
and burned six adjacent trees.
Frequency-size statistics for forest fires can be determined. Two exam-
ples for a 100 X 100 tree forest are given in Figure 16.16. The number of
burning clusters N is given as a function of their size Af for two values of
the sparking frequency f. For the larger value of f, fires consume the forest
before large clusters can form. A reasonably good correlation with the frac-
tal relation (2.2) is obtained taking D = 2.00. The roll-off from the power
law near the larger end of the scaling region is very similar to that illustrated
for the slider-block model in Figure 16.8. When the sparking frequency f is
reduced, we observe an excess number of catastrophic fires that consume all
or nearly all of the 10,000 trees. Again this is very similar to the behavior
found for slider blocks when the stiffness parameter is large as illustrated in
Figure 16.9.
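The forest-fire rules described here can be sketched as follows. The grid size, number of steps, sparking interval, and random seed are arbitrary illustrative choices, not the values used for the figures.

```python
import random
from collections import deque

def forest_fire(n=100, steps=20_000, spark_every=125, seed=2):
    """Sketch of the forest-fire model.

    At each time step a random grid point is chosen: normally a tree is
    planted there if the point is empty, but every `spark_every` steps a
    match is dropped instead.  A match landing on a tree burns the whole
    connected (nearest-neighbor) cluster.  Returns the list of fire sizes.
    """
    random.seed(seed)
    forest = [[False] * n for _ in range(n)]
    fires = []                                  # sizes A_f of the fires
    for t in range(1, steps + 1):
        i, j = random.randrange(n), random.randrange(n)
        if t % spark_every == 0:
            if forest[i][j]:                    # the match ignites a cluster
                size, queue = 0, deque([(i, j)])
                forest[i][j] = False
                while queue:
                    a, b = queue.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < n and 0 <= nb < n and forest[na][nb]:
                            forest[na][nb] = False
                            queue.append((na, nb))
                fires.append(size)
        elif not forest[i][j]:
            forest[i][j] = True                 # plant a tree

    return fires

fires = forest_fire()
```

Binning the returned fire sizes gives frequency-size statistics of the kind shown in Figure 16.16; lowering `spark_every`'s reciprocal (the sparking frequency) lets larger clusters grow before they burn.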
Figure 16.16. The number of forest fires N with size Af (Af is the number of
trees that burn in a fire) is given as a function of Af. Results are for a
100 × 100 forest grid with two values of the sparking parameter f.
Problems
Use the random numbers to assign particles to boxes and carry out the
cellular automata model described in this chapter.
Problem 16.5.
Consider the linear grid of four boxes illustrated above. Use the se-
quence of random numbers given in Problem 16.4 to assign particles to
the four boxes. Use the following rules: When a box has two particles it
is unstable and they are redistributed to the two adjacent boxes. If either
of these boxes then has two particles, they are again redistributed. Particles
are lost from the ends of the linear grid.
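A sketch of these linear-grid rules follows; box indices are zero-based here, and the random sequence from Problem 16.4 is not reproduced, so this illustrates only the redistribution mechanics.

```python
def add_to_linear_grid(boxes, k):
    """Add one particle to box k of a linear grid and relax it.

    Any box holding two particles is unstable and sends one particle to
    each neighbor; particles falling off either end of the grid are lost.
    """
    boxes[k] += 1
    while True:
        unstable = [i for i, b in enumerate(boxes) if b >= 2]
        if not unstable:
            return boxes
        for i in unstable:
            boxes[i] -= 2
            if i - 1 >= 0:
                boxes[i - 1] += 1   # one particle to the left neighbor
            if i + 1 < len(boxes):
                boxes[i + 1] += 1   # one particle to the right neighbor

boxes = [0, 0, 0, 0]
add_to_linear_grid(boxes, 1)   # second box now holds one particle
add_to_linear_grid(boxes, 1)   # a second particle triggers a redistribution
# boxes is now [1, 0, 1, 0]
```

Feeding in the random sequence of the problem, box by box, reproduces the evolution the problem asks for.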
Problem 16.6. Consider the evolution of the forest-fire model illustrated in
Figure 16.15. Consider the configuration given in (d) and determine its
subsequent evolution using the random number sequence 96, 09, 35, 67,
13, 33, 94, 44, 66, 37. (a) How many trees are planted? (b) How many
forest fires occur and how many trees are burned in them?
Problem 16.7. Consider the evolution of the forest-fire model illustrated in
Figure 16.15. Consider the configuration given in (d) and determine its
subsequent evolution using the random number sequence 15, 81, 55, 25,
53, 65, 29, 17, 73, s. (a) How many trees are planted? (b) How many
forest fires occur and how many trees are burned in them?
Problem 16.8. Consider a linear (one-dimensional) forest-fire model using a
grid of 10 points numbered sequentially from 0 to 9. Consider f = 1/4 so
that after three trees are planted on random points, a match is dropped on
a random point. Assume initially that trees are planted on points 1, 3, and
5 and consider the random sequence 0, 1, 7, 2, 3, 2, 6, 4, 0, 7, 7, 4, 9, 4, 7,
6. (a) Which points have trees after these 16 time steps? (b) How many
forest fires occurred and how many trees burned in each fire?
Chapter Seventeen
WHERE DO WE
STAND?
bulence, and between the Rikitake dynamo and the generation of the earth's
magnetic field.
The introduction of models that exhibit self-organized criticality has
been a major advance in extending concepts of chaos to higher-order sys-
tems. In this regard slider-block models play a central role. Slider-block
models were introduced as simple analog models for earthquakes. Distrib-
uted seismicity is taken to be a type example of self-organized critical be-
havior. The behavior of slider-block models is deterministic. Two slider
blocks exhibit the classical chaotic behavior of a low-order system. Large
numbers of slider blocks are self-organized critical. By systematically in-
creasing the number of blocks, the transition from chaotic to self-organized
critical behavior can be studied.
One of the present frontiers of research is to examine the relationship be-
tween models that exhibit self-organized criticality and the basic aspects of
statistical mechanics. For example, can earthquakes be better understood in
terms of the statistical fluctuations of a quasi-equilibrium system? Another
recent development is the recognition that complex fractal dimensions lead
to log-periodic behavior. It has been suggested that log-periodic behavior
may lead to a viable earthquake-prediction strategy.
Some would argue that the concepts covered in this book fall under the
broad umbrella of "complexity." But complexity is so broad a term that it de-
fies any all-encompassing definition. Certainly, many aspects of geology
and geophysics are complex; just as many problems in biology, economics,
and human behavior are complex. There are also links between important
problems in all these areas. This has led a number of scientists to propose a
new science of complexity. The science would include fractals, chaos, and
self-organized criticality. This is a major feature of the activities at the Santa
Fe Institute. But there has also been a strong reaction against "complexity"
with regard to its generality and a failure to deliver on promises made by
some of its practitioners. The entire area of fractals, chaos, self-organized
criticality, and complexity remains extremely active, and it is impossible to
predict with certainty what the future holds.
REFERENCES
Klacka, J. (1992). Mass distribution in the asteroid belt, Earth Moon Planets 56, 47-52.
Klement, S., Kratky, K. W. & Nittmann, J. (1993). Practical time-series analysis with multifractal methods, Fractals 1, 735-43.
Knopoff, L., Landoni, J. A. & Abinante, M. S. (1993). Dynamic model of an earthquake fault with localization, Phys. Rev. A46, 7445-9.
Kondev, J. & Henley, C. L. (1995). Geometrical exponents of contour loops on random Gaussian surfaces, Phys. Rev. Lett. 74, 4580-3.
Korcak, J. (1940). Deux types fondamentaux de distribution statistique, Bull. Inst. Int. Stat. 30, 295-9.
Korvin, G. (1992). Fractal Models in the Earth Sciences, Elsevier, Amsterdam, 381 pp.
Kramer, S. & Marder, M. (1992). Evolution of river networks, Phys. Rev. Lett. 68, 205-8.
Krause, F. & Schmidt, J. J. (1988). A low-dimensional attractor for modelling the reversals of the earth's magnetic field, Phys. Earth Planet. Int. 52, 23-9.
Krohn, C. E. (1988a). Fractal measurements of sandstones, shales, and carbonates, J. Geophys. Res. 93, 3297-305.
Krohn, C. E. (1988b). Sandstone fractal and Euclidean pore volume distributions, J. Geophys. Res. 93, 3286-96.
Krohn, C. E. & Thompson, A. H. (1986). Fractal sandstone pores: Automated measurements using scanning-electron microscope images, Phys. Rev. B33, 6366-74.
Kucinskas, A. B. & Turcotte, D. L. (1994). Isostatic compensation of equatorial highlands on Venus, Icarus 112, 104-16.
Kucinskas, A. B., Turcotte, D. L., Huang, J. & Ford, P. G. (1992). Fractal analysis of Venus topography in Tinatin Planitia and Ovda Regio, J. Geophys. Res. 97, 13,635-41.
Langer, J. S. & Tang, C. (1991). Rupture propagation in a model of an earthquake fault, Phys. Rev. Lett. 67, 1043-6.
La Pointe, P. R. (1988). A method to characterize fracture density and connectivity through fractal geometry, Int. J. Rock Mech. Min. Sci. 25, 421-9.
La Pointe, P. R. (1995). Estimation of undiscovered hydrocarbon potential through fractal geometry, in Fractals in Petroleum Geology and Earth Processes, C. C. Barton and P. R. La Pointe, eds., pp. 35-37, Plenum Press, New York.
Lasky, S. G. (1950). How tonnage and grade relations help predict ore reserves, Eng. Mining J. 151, 81-5.
Laverty, M. (1987). Fractals in karst, Earth Surface Proc. Landforms 12, 475-80.
Leary, P. & Abercrombie, R. (1994). Fractal fracture scattering origin of S-wave coda: Spectral evidence from recordings at 2.5 km, Geophys. Res. Lett. 21, 1683-6.
Ledesert, B., Dubois, J., Genter, A. & Meunier, A. (1993). Fractal analysis of fractures applied to Soultz-sous-Forets hot dry rock geothermal program, J. Volcan. Geotherm. Res. 57, 1-17.
Lee, H. K. & Schwarcz, H. P. (1995). Fractal clustering of fault activity in California, Geology 23, 377-80.
Leheny, R. L. & Nagel, S. R. (1993). Model for the evolution of river networks, Phys. Rev. Lett. 71, 1470-3.
Leopold, L. B. & Langbein, W. B. (1962). The concept of entropy in landscape evolution, U.S. Geological Survey Paper 500A, 20 pp.
Lifton, N. A. & Chase, C. G. (1992). Tectonic, climatic and lithologic influences on landscape fractal dimension and hypsometry: Implications for landscape evolution in the San Gabriel Mountains, California, Geomorph. 5, 77-114.
Lin, B. & Taylor, P. L. (1994). Model of spatiotemporal dynamics of stick-slip motion, Phys. Rev. E49, 3940-7.
Lin, B. & Yang, Z. R. (1986). A suggested lacunarity expression for Sierpinski carpets, J. Phys. A19, L49-52.
Little, S. A. (1994). Wavelet analysis of seafloor bathymetry: An example, in Wavelets in Geophysics, E. Foufoula-Georgiou & P. Kumar, eds., pp. 167-82, Academic Press, New York.
Little, S. A., Carter, P. H. & Smith, D. K. (1993). Wavelet analysis of bathymetric profile reveals anomalous crust, Geophys. Res. Lett. 20, 1915-8.
Liu, T. (1992). Fractal structure and properties of stream networks, Water Resour. Res. 28, 2981-8.
Lorenz, E. N. (1963). Deterministic nonperiodic flow, J. Atmos. Sci. 20, 130-41.
Lou, M. & Rial, J. A. (1995). Applications of the wavelet transform in detecting multiple events of microearthquake seismograms, Geophys. Res. Lett. 22, 2199-202.
Lovejoy, S., Lavallee, D., Schertzer, D. & Ladoy, P. (1995). The l^(1/2) law and multifractal topography: Theory and analysis, Nonlinear Proc. Geophys. 2, 16-22.
Lu, Y. N., Liu, W. S. & Ding, E. J. (1994). Hysteresis in a theoretical spring-block model, Phys. Rev. Lett. 72, 4005-8.
Madden, T. R. (1983). Microcrack connectivity in rocks: A renormalization group approach to the critical phenomena of conduction and failure in crystalline rocks, J. Geophys. Res. 88, 585-92.
Magde, L. S., Dick, H. J. B. & Hart, S. R. (1995). Tectonics, alteration and the fractal distribution of hydrothermal veins in the lower ocean crust, Earth Planet. Sci. Lett. 129, 103-19.
Main, I., Peacock, S. & Meredith, P. G. (1990). Scattering attenuation and the fractal geometry of fracture systems, Pure Appl. Geophys. 133, 283-304.
Sammis, C., King, G. & Biegel, R. (1987). The kinematics of gouge deformation, Pure Appl. Geophys. 125, 777-812.
Sammis, C. G., Osborne, R. H., Anderson, J. L., Banerdt, M. & White, P. (1986). Self-similar cataclasis in the formation of fault gouge, Pure Appl. Geophys. 123, 53-78.
Sammis, C. G. & Steacy, S. J. (1995). Fractal fragmentation in crustal shear zones, in Fractals in the Earth Sciences, C. Barton & P. R. La Pointe, eds., pp. 179-204, Plenum Press, New York.
Saucier, A. (1992). Scaling of the effective permeability in multifractal porous media, Physica A191, 289-94.
Saucier, A. & Muller, J. (1993). Use of multifractal analysis in the characterization of geological formations, Fractals 1, 617-28.
Scheidegger, A. E. (1967). A stochastic pattern for drainage patterns into an intermontane trench, Int. Assoc. Sci. Hydrol. Bull. 12, 15-20.
Scheidegger, A. E. (1991). Theoretical Geomorphology, 3rd ed., Springer-Verlag, Berlin, 434 pp.
Schenck, H. (1963). Simulation of the evolution of drainage-basin networks with a digital computer, J. Geophys. Res. 68, 5739-45.
Schiff, S. J. (1992). Resolving time-series structure with a controlled wavelet transform, Optical Eng. 31(11), 2492-5.
Schmittbuhl, J., Vilotte, J. P. & Roux, S. (1993). Propagative macrodislocation modes in an earthquake fault model, Europhys. Lett. 21, 375-80.
Schoeny, P. A. & Saunders, J. A. (1993). Natural gold dendrites from hydrothermal Au-Ag deposits: Characteristics and computer simulation, Fractals 1, 585-93.
Scholz, C. H. (1977). A physical interpretation of the Haicheng earthquake prediction, Nature 267, 121-4.
Scholz, C. H. (1991). Earthquakes and faulting: Self-organized critical phenomena with a characteristic dimension, in Spontaneous Formation of Space-Time Structures and Criticality, T. Riste & D. Sherrington, eds., pp. 41-56, Kluwer Academic Publishers, Netherlands.
Scholz, C. H. & Cowie, P. A. (1990). Determination of total strain from faulting using slip measurements, Nature 346, 837-9.
Scholz, C. H., Dawers, N. H., Yu, J. Z. & Anders, M. H. (1993). Fault growth and fault scaling laws: Preliminary results, J. Geophys. Res. 98, 21,951-61.
Schoutens, J. E. (1979). Empirical analysis of nuclear and high-explosive cratering and ejecta, in Nuclear Geoplosics Sourcebook, Vol. 55, part 2, section 4, Rep. DNA OIH-4-2, Def. Nuclear Agency, Bethesda, MD.
Schreider, S. Yu. (1990). Formal definition of premonitory seismic quiescence, Phys. Earth Planet. Int. 61, 113-27.
Schuster, H. G. (1995). Deterministic Chaos, 3rd ed., VCH, Weinheim, 291 pp.
Velde, B., Moore, D., Badri, A. & Ledesert, B. (1993). Fractal and length analysis of fractures during brittle to ductile changes, J. Geophys. Res. 98, 11,935-40.
Verhulst, F. (1990). Nonlinear Differential Equations and Dynamical Systems, Springer-Verlag, Berlin, 277 pp.
Vicsek, T. (1992). Fractal Growth Phenomena, 2nd ed., World Scientific, Singapore, 488 pp.
Villemin, T., Angelier, J. & Sunwoo, C. (1995). Fractal distribution of fault length and offsets: Implications of brittle deformation evaluation - the Lorraine coal basin, in Fractals in the Earth Sciences, C. Barton & P. R. La Pointe, eds., pp. 205-26, Plenum Press, New York.
von Seggern, D., Alexander, S. S. & Baag, C. E. (1981). Seismicity parameters preceding moderate to major earthquakes, J. Geophys. Res. 86, 9325-51.
Voss, R. F. (1985a). Random fractals: Characterization and measurement, in Scaling Phenomena in Disordered Systems, R. Pynn & A. Skjeltorp, eds., pp. 1-11, Plenum Press, New York.
Voss, R. F. (1985b). Random fractal forgeries, in Fundamental Algorithms for Computer Graphics, NATO ASI Series, Vol. F17, R. A. Earnshaw, ed., pp. 805-35, Springer-Verlag, Berlin.
Voss, R. F. (1988). Fractals in nature: From characterization to simulation, in The Science of Fractal Images, H. O. Peitgen & D. Saupe, eds., pp. 21-70, Springer-Verlag, New York.
Wallace, R. E. (1977). Profiles and ages of young fault scarps, north-central Nevada, Geol. Soc. Am. Bull. 88, 1267-81.
Walsh, J. J. & Watterson, J. (1988). Analysis of the relationship between displacements and dimensions of faults, J. Struct. Geol. 10, 329-47.
Walsh, J., Watterson, J. & Yielding, G. (1991). The importance of small-scale faulting in regional extension, Nature 351, 391-3.
Watanabe, K. & Takahashi, H. (1995). Fractal geometry characterization of geothermal reservoir fracture networks, J. Geophys. Res. 100, 521-8.
Weitz, D. A. & Oliveria, M. (1984). Fractal structures formed by kinetic aggregation of aqueous gold colloids, Phys. Rev. Lett. 52, 1433-6.
Willgoose, G., Bras, R. L. & Rodriguez-Iturbe, I. (1991). A coupled channel network growth and hillslope evolution model, I: Theory, Water Resour. Res. 27, 1671-84.
Wilson, K. G. & Kogut, J. (1974). The renormalization group and the ε expansion, Phys. Rep. 12C, 75-200.
Witten, T. & Sander, L. M. (1981). Diffusion-limited aggregation, a kinetic critical phenomenon, Phys. Rev. Lett. 47, 1400-3.
Wu, B. & Lye, L. M. (1994). Identification of temporal scaling behavior of flood: A study of fractals, Fractals 2, 283-6.
GLOSSARY
OF TERMS
ATTRACTOR A point in phase space toward which a time history evolves
as transients die out.
BASIN OF ATTRACTION Some dynamical systems have more than
one fixed point. The region in phase space in which solutions approach a
particular fixed point is known as the basin of attraction of that fixed
point. The boundaries of a basin of attraction are often fractal.
BIFURCATION A change in the dynamical behavior of a system when a
parameter is varied.
BROWNIAN WALK The running sum of a sequence of random values
usually obtained from a normal distribution.
CANTOR DUST A fractal set generated by subdividing a line into parts.
CHAOS Solutions to deterministic equations are chaotic if adjacent solu-
tions diverge exponentially in phase space; this requires a positive Lya-
punov exponent.
CLUSTER A group of particles with nearest-neighbor links to other parti-
cles in the cluster.
DETERMINISTIC A dynamical system whose equations and initial con-
ditions are fully specified and are not stochastic or random.
DIFFERENCE EQUATION An equation that relates a value of a func-
tion xn+1 to a previous value xn. A difference equation generates a dis-
crete set of values of the function x.
DIFFUSION-LIMITED AGGREGATION (DLA) Diffusing (random-
walking) particles accrete to a seed particle to form a dendritic structure.
DIMENSION The usual definition of dimension is the topological di-
mension. The dimension of a point is zero, of a line is one, of a square is
two, of a cube is three. In this book we have introduced the concept of
fractional (noninteger) dimensions, or fractals.
FEIGENBAUM CONSTANT The ratios of successive differences be-
tween period-doubling bifurcation parameters approach this number
(F = 4.669 202).
372 A P P E N D I X A: GLOSSARY OF T E R M S
UNITS AND
SYMBOLS
Table B 1. SI units
Basic units
Length meter m
Time second s
Mass kilogram kg
Temperature kelvin K
Electric current ampere A
Derived units
Force newton N
Energy joule J
Power watt W
Pressure pascal Pa
Frequency hertz Hz
Charge coulomb C
Electric potential volt V
Magnetic field tesla T
Multiples of 10
10^-3 milli
10^-6 micro
10^-9 nano
10^-12 pico
10^3 kilo
10^6 mega
10^9 giga
10^12 tera
APPENDIX B: UNITS AND SYMBOLS
Equation
Symbol Quantity introduced SI units
Power (3.40)
Parameter (10.1)
Radius of earth (7.63) m
Frequency of earthquakes (4.1) s^-1
Area (3.67) m^2
Lorenz variable (12.23)
b-value for earthquakes (4.1)
Lorenz variable (12.24)
Benioff strain (15.38)
Constant (4.4)
Specific heat at constant pressure (12.4) J kg^-1 K^-1
Coefficient of variation (3.33)
Constant (2.1)
Concentration (5.1)
Pair correlation distribution (6.13)
Lorenz variable (12.24)
Moment of inertia (14.3) kg m^2
Constant (4.4)
Euclidean dimension (6.12)
Fractal dimension (2.1)
Energy (4.2) J
Probability (3.5)
Probability distribution function (3.12)
Probability of fragmentation (3.70)
Fraction (6.22)
Frequency (7.41) s^-1
Cumulative distribution function (3.10)
Flood frequency factor (8.31)
Feigenbaum constant (10.11)
Force (11.1) N
Wavelet filter (8.32)
Acceleration due to gravity (12.3) m s^-2
Applied torque (14.2) N m
Elevation (7.28) m
Layer thickness (12.5) m
Hausdorff measure (7.1)
Hurst exponent (7.56)
Electrical current (14.1) A
Transport coefficient (8.25) m^2 s^-1
Boltzmann constant (3.16) J K^-1
Constant (3.40)
Wave number (7.66) m^-1
Spring constant (9.12) N m^-1
Thermal conductivity (12.4) W m^-1 K^-1
Partition coefficient (5.29)
Rikitake parameter (14.13)
Length (2.9) m
Lacunarity (6.20)
Self-inductance (14.3) V s A^-1
(continued)
Mass
Earthquake magnitude
Mass
Earthquake moment
Moment
Mutual inductance
Number
Number
Nusselt number
Number per unit time
Pressure
Probability
Perimeter
Prandtl number
Power
Volumetric flow
Linear dimension
Autocorrelation function
Radius
Ratio of Rayleigh numbers
Rate
Range
Resistance
Rayleigh number
Bifurcation ratio
Length-order ratio
Information entropy
Entropy
Power spectral density
Standard deviation
Time
Temperature
Time interval
Branching ratio
Variable
Horizontal velocity coordinate
Velocity
Vertical velocity
Volume
Variance
Wavelet transform
Variable
Horizontal coordinate m
(continued )
y Variable (3.18)
y Vertical coordinate (2.7) m
Y Fourier transform (7.38)
Y Nondimensional position (11.3)
α Constant (4.5)
α Mass ratio (5.13)
α Stiffness parameter (11.12)
α Volume coefficient of thermal
expansion (12.8) K^-1
β Power (7.41)
β Constant (9.12)
β Symmetry parameter (11.12)
B Constant (4.6)
γ Coefficient of skew (3.4)
γ Semivariance (7.8)
Γ Slope (10.6)
δ Displacement across fault (4.3) m
ε Strain (4.27)
ε Small quantity (5.39)
ε Parameter (9.13)
θ Latitude (7.63)
θ Polar coordinate (9.28)
θ Temperature difference (12.7) K
κ Thermal diffusivity (12.11) m^2 s^-1
λ Wavelength (7.66) m
λ Lyapunov exponent (10.19)
μ Shear modulus (4.3) Pa
μ Viscosity (12.2) Pa s
μ Rikitake parameter (14.8)
ν Power (3.48)
ν Hazard rate (15.30) s^-1
ρ Density (3.82) kg m^-3
ρ Polar coordinate (9.28)
σ Standard deviation (3.3)
σ Stress (15.44) Pa
τ Time interval (4.23) s
τ Nondimensional time (11.3)
φ Porosity (3.81)
φ Enrichment factor (5.1)
φ Longitude (7.63)
φ Friction parameter (11.3)
ψ Stream function (12.6) m^2 s^-1
Ω Angular velocity (14.1) s^-1
ANSWERS
TO SELECTED
PROBLEMS
1.57.
1.26.
1/3.
2/3.
(a) (8 - 4,)Cd7. (b) (8 - $8)+8Cd7' ( c ) 1 < < 8.
(d) 0 < D < 3.
10^7 kg.
3.18 × 10^8 kg.
88 kg.
4.85 × 10^6 kg.
8.37 × 10^10 kg.
3.9 × 10^7 kg.
2.55.
0.5.
1, 1/3, 1/9, 1/27.
1, 1, 1, 1.
1, 3/5, 9/25.
1, 4/7, 16/49, 64/343.
1, 3/7, 9/49, 27/343.
1, 1/3, 1/9, 1/27.
1, 1/2, 1/4, 1/8.
1, 17/25, 289/625.
1, 6/8, 36/64, 216/512.
1, 26/27, 676/729.
C(1/2) = 4, C(1) = 2.
C(1) = 6.
C(1) = 24, C(√2) = 24/√2, C(√3) = 8/√3.
INDEX
Rayleigh number, 258, 260, 264, 268, 269, 271, 272, 273
  critical, 260, 264
rebound, elastic, 245, 246
recursion relations, 3, 293
recursive map, 243, 286
regional seismicity, 5, 63, 324
remanent magnetism, 279
renormalization, 47, 83, 89, 289, 373
renormalization group method, 5, 289, 290, 292, 295, 299, 316, 317
  applied to fault rupture, 299
  applied to fragmentation, 295
repose, angle of, 316
rescaled range analysis, 138, 158, 161, 162, 373
  of fractional Gaussian noises, 161, 162
reserves, ore, 81
  petroleum, 96
reservoir, oil, 3, 96, 97
reservoir storage, 158
resistance, 281
reversals, magnetic field, 3, 279
ridge, ocean, 13, 56, 70, 269
ridge push, 65
Rikitake dynamo, 3, 279, 280, 282, 342
rings, tree, 160
river deltas, 193
river discharge, 37, 136, 158, 160, 215
  time series of, 158, 215
river meanders, 195
river networks, 2, 5, 181, 185, 191, 194, 199, 207
river sinuosity, 195
rivers, braided, 207
  multifractal analysis of, 129
  order of, 181
rock fragments, 1, 15, 28, 42
  size distribution of, 1, 15, 42
rock surfaces, 166
rocky coastlines, 1, 12, 15, 132
  length of, 1, 12
rod, measuring, 1, 12
Rosin and Rammler distribution, 40, 42
rotational period, 136
roughness, 2, 15, 165, 176
  of coastline, 15
  of topography, 2, 165, 176
roughness-range method, 162
route to chaos, 237
ruler method, 12, 13, 14
rupture, fault, 299
rupture area, earthquake, 39, 58
saddle point, 225, 373
St. Louis earthquakes, 66
San Andreas fault, 56, 62, 65, 67, 104
San Fernando earthquake, 56, 330
San Francisco earthquake, 56, 57, 330
San Gabriel fault, 104
sand pile model, 5, 316, 317
sand piles, 316, 317, 321
scale invariance, 1, 11, 39, 84, 91, 289, 304, 316, 372, 373
scale invariant distribution, 3
scaling, counter, 162
  multifractal, 128
scarps, earthquake, 203
  shoreline, 203
screen analysis, 42
sea-floor bathymetry, 2, 163, 165, 167, 168, 217
sea level, 19, 167
second law of thermodynamics, 289
sedimentary basement, 76
sedimentary basin, 76
sedimentary bedding planes, 19, 321
sedimentary completeness, 24
sedimentary hiatuses, 18, 19, 20, 167
sedimentary pile, 25, 76
sedimentary record, 18, 19
  gaps in, 18, 19, 20, 167
sedimentary sequence, 22, 23, 321
  fractal dimension of, 22, 321
  power-law correlation of, 23
  thickness statistics of, 321
sedimentary unconformities, 18, 19, 20, 21
sedimentation, episodic, 23
sediments, age distribution of, 18
  deposition of, 18, 22, 202, 321
  erosion of, 18, 19, 23, 202
  subsidence of, 76
seismic activation, 329, 330
seismic hazards, 63, 66, 254, 307, 328, 329, 342
seismic moment, 60, 305
seismic precursors, 305, 328
seismic quiescence, 328
seismicity, 1, 4, 15, 39, 56, 57, 58, 59, 61, 65, 74, 103, 107, 128, 145, 317, 320, 324, 325, 326, 327, 328, 329, 330, 341
  clustering of, 103, 328
  crustal, 322
  distributed, 1, 76, 107, 128, 320, 324, 325, 326, 327
  fractal dimension of, 59, 76
  global, 59
  induced, 5, 129, 328
  Memphis-St. Louis, 66
  pair correlations of, 107
  regional, 5, 63, 324
  southern California, 61, 63, 330, 336
seismograms, 166, 217
self-affine, 373
self-affine fractal, 132, 135, 138, 140, 145, 146, 148, 150, 163, 166, 372
  box counting method for, 132, 135, 147
  Brownian walk as a, 140
  deterministic, 133
  fractional Brownian walk as a, 140, 146, 150
  fracture surfaces as a, 166
  sea level as a, 167
  statistical, 140
  topography as a, 145, 178
  variance of, 146
self-affine fractal dimension, 135
self-affine tiling, 207
self-affine time series, 145