
Design and Analysis of Experiments, D.C. Montgomery
Introduction
Strategy of experimentation
Observations on a system or process can lead to theories or hypotheses
about what makes the system work, but experiments are required to
demonstrate that these theories are correct.
An experiment can be defined as a test or series of runs in which
purposeful changes are made to the input variables of a process or system
so that we may observe and identify the reasons for changes that may be
observed in the output response. Well-designed experiments can often lead to a model of system performance, called an empirical model.
The objective of experimentation is to develop a robust system or process
which is affected minimally by external sources of variability.
A process or system can be visualized as a combination of operations, machines, methods, people, and other resources that transforms some inputs into an output y that has one or more observable response variables. Some of the process and system variables x1, x2, ..., xp are controllable, whereas other variables z1, z2, ..., zp are uncontrollable.

The objectives of the experiment may include the following:

1) Determining which variables are most influential on the response y
2) Determining where to set the influential x's so that y is almost always near the desired nominal value
3) Determining where to set the influential x's so that the variability in y is small
4) Determining where to set the influential x's so that the effects of the uncontrollable variables z1, z2, ..., zp are minimized

The general approach to planning and conducting the experiment is called the strategy of experimentation, and there are many different strategies the experimenter can use.
The best-guess approach is frequently used in practice by engineers and scientists, and consists of changing the levels of one or two factors in each test based on the results of the previous test. However, it has at least two disadvantages:

If the initial best guess does not produce the desired results, it is not easy to decide which factors to change for the next guess, so the process can take a long time to converge.
If the initial best guess does produce acceptable results, the experimenter may be tempted to stop testing even though there is no guarantee that the best solution has been found.

The one-factor-at-a-time (OFAT) approach is also extensively used in practice. It consists of selecting a starting point, or baseline set of levels, for each factor, and then successively varying each factor over its range with the other factors held constant at the baseline level. A series of graphs of the response against each factor is then produced.

The major disadvantage of the OFAT strategy is that it fails to consider any possible interaction between the factors, that is, the failure of one factor to produce the same effect on the response at different levels of another factor. Interactions between factors are very common, and if they occur, the OFAT strategy will usually produce poor results.
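
To make the interaction problem concrete, here is a small toy sketch (Python assumed; the response function, factor names, and numbers are invented for illustration and are not from the text). With a strong interaction present, varying one factor at a time from a baseline settles on a combination that the full factorial comparison shows is not the best one.

# Toy illustration (hypothetical response, not from the text): OFAT vs. factorial
# when the two factors interact.
def response(a, b):
    # made-up response with a strong a*b interaction; a, b coded as -1/+1
    return 60 + 2*a + 3*b - 4*a*b

levels = [-1, +1]

# OFAT from a baseline of (a, b) = (-1, -1): tune a first, then b.
baseline_b = -1
best_a = max(levels, key=lambda a: response(a, baseline_b))
best_b = max(levels, key=lambda b: response(best_a, b))
print("OFAT settles on", (best_a, best_b), "->", response(best_a, best_b))   # 63

# Factorial: evaluate all four combinations and keep the true best.
best = max(((a, b) for a in levels for b in levels), key=lambda ab: response(*ab))
print("Factorial finds", best, "->", response(*best))                        # 65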

The correct approach to dealing with several factors is to conduct a factorial experiment, in which factors are varied together instead of one at a time. For two factors at two levels, the factorial experiment is called a 2^2 factorial design and requires four runs. Example (each of the four factor combinations is run twice; the two factors here are the driver and the ball):

Driver effect = (92 + 94 + 93 + 91)/4 - (88 + 91 + 88 + 90)/4 = 3.25

Ball effect = (88 + 91 + 92 + 94)/4 - (88 + 90 + 93 + 91)/4 = 0.75

Interaction effect = (92 + 94 + 88 + 90)/4 - (88 + 91 + 93 + 91)/4 = 0.25
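
The same three effects can be reproduced programmatically. The sketch below is only an illustration (Python with NumPy assumed; the -1/+1 level coding and the assignment of the eight scores to driver/ball combinations are inferred from the effect calculations above, not stated explicitly in the text).

# Main effects and interaction for the replicated 2^2 example above.
import numpy as np

# Each row: (driver level, ball level, score), with the usual -1/+1 coding.
runs = np.array([
    [+1, +1, 92], [+1, +1, 94],
    [+1, -1, 93], [+1, -1, 91],
    [-1, +1, 88], [-1, +1, 91],
    [-1, -1, 88], [-1, -1, 90],
])
driver, ball, y = runs[:, 0], runs[:, 1], runs[:, 2]

def effect(sign, resp):
    # average response at the + level minus average response at the - level
    return resp[sign > 0].mean() - resp[sign < 0].mean()

print("driver effect:     ", effect(driver, y))        # 3.25
print("ball effect:       ", effect(ball, y))          # 0.75
print("interaction effect:", effect(driver * ball, y)) # 0.25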

Statistical testing could be used to determine whether any of these effects actually differ from zero, and it turns out that there is reasonably strong statistical evidence that the driver effect differs from zero and the other two effects do not.
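
As a rough, informal version of such a test (not the formal factorial analysis developed later in the book), one could compare the four scores at each driver level with a two-sample t-test; the scipy-based sketch below is only one possible way to do that quick check.

# Informal check of the driver effect; this ignores the factorial structure,
# so it is only a rough screen, not a proper factorial ANOVA.
from scipy import stats

driver_plus = [92, 94, 93, 91]    # scores at one driver level
driver_minus = [88, 91, 88, 90]   # scores at the other driver level

t_stat, p_value = stats.ttest_ind(driver_plus, driver_minus)
print(f"driver effect: t = {t_stat:.2f}, p = {p_value:.3f}")
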
One very important feature of the factorial experiment is that it makes the most efficient use of the experimental data.
This concept can be extended to a larger number of factors, e.g. three or four, which requires a correspondingly larger factorial design, e.g. a 2^3 or a 2^4 factorial design respectively.

A 2^k factorial design requires at least 2^k test runs to cover all the combinations, which can become infeasible as k grows. For these cases, the fractional factorial experiment is a variation of the basic factorial design in which only a subset of the runs is used. If selected appropriately, the subset still captures good information about the main effects of the k factors as well as some information about how these factors interact.
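
One standard way to build such a subset is the half-fraction, in which the coded levels of the last factor are set equal to the product of the coded levels of the others. The sketch below (Python with NumPy assumed; the function name and construction are illustrative, not from the text) generates a 2^(k-1) design in this way.

# Construct a half-fraction of a 2^k design by aliasing the last factor
# with the product of the other factors' coded levels.
import itertools
import numpy as np

def half_fraction(k):
    # full 2^(k-1) factorial in the first k-1 factors, -1/+1 coding
    base = np.array(list(itertools.product([-1, 1], repeat=k - 1)))
    # generator: last factor = product of the others
    last = base.prod(axis=1, keepdims=True)
    return np.hstack([base, last])

design = half_fraction(4)   # 8 runs instead of the 2^4 = 16 of the full design
print(design)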

Some typical applications of experimental design


Experimental design methods are of fundamental importance in engineering
design activities, where new products are developed and existing ones
improved. Some applications of experimental design in engineering design
include

Evaluation and comparison of basic design configurations
Evaluation of material alternatives
Selection of design parameters so that the product will work well under
a wide variety of field conditions, that is, so that the product is robust
Determination of key product design parameters that impact product
performance
Formulation of new products

Basic principles
Statistical design of experiments refers to the process of planning the
experiment so that appropriate data will be collected and analyzed by
statistical methods, resulting in valid and objective conclusions. There are
two parts in the process: the design of the experiment and the statistical
analysis of the data.
There are three basic principles of experimental design: randomization,
replication and blocking; and sometimes the factorial principle is also
added to these three.

Randomization allows the observations to be treated as independently distributed random variables, and it also assists in averaging out the effects of extraneous factors that might be present. There are statistical methods for dealing with restrictions on randomization.
Replication means an independent repeat run of each factor combination. It allows the experimenter to obtain an estimate of the experimental error, which becomes a basic unit of measurement for determining whether observed differences in the data are really statistically different. It also permits the experimenter to obtain a more precise estimate of the true mean response for one of the factor levels in the experiment.
Blocking is a design technique used to reduce or eliminate the
variability transmitted from nuisance factors, that is, factors that may
influence the experimental response but in which we are not directly
interested. Generally, a block is a set of relatively homogeneous
experimental conditions.
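
As a small illustration of how replication and randomization might show up in practice, the sketch below (Python assumed; the factor names and level coding are placeholders, not from the text) lays out a replicated 2^2 run sheet and then randomizes the run order.

# Replicated 2^2 run list with a randomized run order.
import itertools
import random

factors = {"A": [-1, +1], "B": [-1, +1]}
replicates = 2

# Every combination of factor levels, repeated once per replicate.
runs = [dict(zip(factors, levels))
        for _ in range(replicates)
        for levels in itertools.product(*factors.values())]

random.seed(1)         # fixed seed only so the printout is reproducible
random.shuffle(runs)   # randomized order helps average out extraneous factors

for i, run in enumerate(runs, start=1):
    print(f"run {i}: {run}")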

Guidelines for designing experiments
