Quality
Quality is a relative term; it lies in the eyes of the perceiver. According to ISO 9000:2000, quality is
defined as the degree to which a set of inherent characteristics fulfils requirements. It is often
expressed as Q = P/E, where P is performance and E is expectations.
The extent to which a product or service successfully meets customer expectations is illustrated
in the diagram below. Quality is an expression of the gap between the standard expected and
the standard provided. When the two coincide (there is no gap), good or satisfactory quality has
been reached; when there is a gap, there is cause for dissatisfaction and an opportunity for
improvement.
Quality Improvement
Quality Improvement is a formal approach to the analysis of performance and systematic
efforts to improve it. The ISO definition of quality improvement states that it is the actions
taken throughout the organization to increase the effectiveness of activities and processes to
provide added benefits to both the organization and its customers. In simple terms, quality
improvement is anything which causes a beneficial change in quality performance. There are
two basic ways of bringing about improvement in quality performance. One is by better
control and the other by raising standards. We don't have suitable words to define these two
concepts. Doing better what you already do is improvement but so is doing something new.
Juran uses the term control for maintaining standards and the term breakthrough for
achieving new standards. Imai uses the term Improvement when change is gradual and
Innovation when it is radical. Hammer uses the term Reengineering for the radical changes.
All beneficial change results in improvement whether gradual or radical so we really need a
word which means gradual change or incremental change. The Japanese have the word
Kaizen which means continuous improvement.
Quality control:
Quality control is the combination of all the devices and techniques that are used to control
product quality at the most economical costs which yield adequate customer satisfaction.
Dimensions of Quality
What are the dimensions of quality?
Before we discuss the dimensions of quality, we must discuss three aspects associated with the
definition of quality: quality of design, quality of conformance, and quality of performance.
Quality of design concerns the conditions that the product or service must minimally satisfy in
order to meet the requirements of the customer. Thus, the product or service must be designed in
such a way as to meet at least minimally the needs of the consumer. However, the design
must be simple and also less expensive so as to meet the customers' product or service
expectations. Quality of design is influenced by many factors, such as product type, cost,
profit policy, demand of the product, availability of parts and materials, and product
reliability.
Quality of conformance is basically meeting the standards defined in the design phase after
the product is manufactured or while the service is delivered. This phase is also concerned
with quality control, starting from the raw material through to the finished product. Three broad
aspects are covered in this definition: defect detection, defect root cause analysis, and
defect prevention. Defect prevention deals with the means to deter the occurrence of defects
and is usually achieved using statistical process control techniques. Defects may be detected
by inspection, testing, or statistical analysis of data collected from the process. Subsequently, the
root causes behind the presence of defects are investigated, and finally corrective actions are
taken to prevent recurrence of the defect.
Quality of performance is how well the product functions or service performs when put to
use. It measures the degree to which the product or Service satisfies the customer from the
perspective of both quality of design and the quality of conformance. Meeting customer
expectation is the focus when we talk about quality of performance.
There are eight such dimensions of quality. These are:
1. Performance:
It involves the various operating characteristics of the product. For a television set, for
example, these characteristics will be the quality of the picture, sound and longevity of the
picture tube.
2. Features:
These are characteristics that are supplemental to the basic operating characteristics. In an
automobile, for example, a stereo CD player would be an additional feature.
3. Reliability:
Reliability of a product is the degree to which it can be depended upon to deliver its intended
benefit over a long period of time.
It addresses the probability that the product will work without interruption or breaking down.
4. Conformance:
It is the degree to which the product conforms to pre- established specifications. All quality
products are expected to precisely meet the set standards.
5. Durability:
It measures the length of time that a product performs before a replacement becomes
necessary. The durability of home appliances such as a washing machine can range from 10
to 15 years.
6. Serviceability:
Serviceability refers to the promptness, courtesy, proficiency and ease in repair when the
product breaks down and is sent for repairs.
7. Aesthetics:
Aesthetic aspect of a product is comparatively subjective in nature and refers to its impact on
the human senses such as how it looks, feels, sounds, tastes and so on, depending upon the
type of product. Automobile companies make sure that in addition to functional quality, the
automobiles are also artistically attractive.
8. Perceived quality:
An equally important dimension of quality is the perception of the quality of the product in
the mind of the consumer. Honda cars, Sony Walkman and Rolex watches are perceived to be
high quality items by the consumers.
Year: Event
1960: The concept of the quality control circle was introduced in Japan by K. Ishikawa.
1970-1980: Oil shock and further development of Company-Wide Quality Control.
The concepts of Statistical Process Control (SPC) were initially developed by Dr. Walter
Shewhart of Bell Laboratories in the 1920's, and were expanded upon by Dr. W. Edwards
Deming, who introduced SPC to Japanese industry after WWII. After early successful
adoption by Japanese firms, Statistical Process Control has now been incorporated by
organizations around the world as a primary tool to improve product quality by reducing
process variation.
Dr. Shewhart identified two sources of process variation: Chance variation, which is inherent in the
process and stable over time, and Assignable (or Uncontrolled) variation, which is unstable
over time - the result of specific events outside the system. Dr. Deming relabeled chance
variation as Common Cause variation, and assignable variation as Special Cause variation.
Based on experience with many types of process data, and supported by the laws of statistics
and probability, Dr. Shewhart devised control charts used to plot data over time and identify
both Common Cause variation and Special Cause variation.
Acceptance sampling
Acceptance sampling is a major component of quality control and is useful when the cost of
testing is high compared to the cost of passing a defective item or when testing is destructive.
It is a compromise between doing 100% inspection and no inspection at all.
There are two types:
1. Outgoing inspection - follows production
2. Incoming inspection - before use in production
12. Remove barriers that rob the hourly worker of the right to pride in work
Remove physical and mental obstacles
Barriers include MBO and performance appraisal
These increase internal, destructive competition and reduce risk taking
13. Institute a vigorous program of education and retraining
Training and retraining must continue
Commitment to permanent employment
14. Define top management's permanent commitment to ever-improving quality and
productivity
PLAN
DO
STUDY (CHECK)
ACT
Plan
Plan the route of action
Decision based on objectives, changes needed, performance measures, persons
responsible, availability of resources
Do
Involvement of everyone
Training, survey of customers, identification of core process
Small scale implementation of planned change
Study (Check)
Measuring and observing the effects, analysis of results and feedback
Deviations from the original plan should be evaluated
Act
Take corrective steps
Standardize the improvement
Requires long range planning, company wide training, good coordination, top
management commitment, etc.
Source: Dale Besterfield et al., Total Quality Management, Pearson Education, third edition,
2005.
Quality Costs
The value of quality must be based on its ability to contribute to profits. The efficiency of a
business is measured in terms of the money it earns. The cost of quality is no different from other costs. It
is the sum of the money that the organization spends in ensuring that customer
requirements are met on a continual basis, plus the costs wasted through failing to achieve
the desired level of quality.
Costs of conformance (appraisal costs):
Test and inspection of incoming materials
Test and inspection of in-process goods
Final product testing and inspection
Supplies used in testing and inspection
Supervision of testing and inspection activities
Depreciation of test equipment
Maintenance of test equipment
Plant utilities in the inspection area
Field testing and appraisal at customer site

Costs of non-conformance (external failure costs):
Cost of field servicing and handling complaints
Warranty repairs and replacements
Repairs and replacements beyond the warranty period
Product recalls
Liability arising from defective products
Returns and allowances arising from quality problems
Lost sales arising from a reputation for poor quality
The following graph gives the relationship between conformance and non-conformance costs.
Basic Statistics
Mean, Mode, Median, and Standard Deviation
Statistics is the practice of collecting and analyzing data. The analysis of statistics is
important for decision making in events where there are uncertainties.
Measures of Central Tendency:
A good way to begin analyzing data is to summarize the data into a single representative
value.
The three most common measures of central tendency are mean, median and mode.
The sample mean is the average and is computed as the sum of all the observed
outcomes from the sample divided by the total number of events. We use $\bar{x}$ as the
symbol for the sample mean. In math terms,

$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

where n is the sample size and the $x_i$ are the observed values.
The mode of a set of data is the number with the highest frequency, one that occurs
maximum number of times.
One problem with using the mean is that it often does not depict the typical outcome. If
there is one outcome that is very far from the rest of the data, then the mean will be strongly
affected by this outcome. Such an outcome is called an outlier.
An alternative measure is the median. The median is the middle score. If we have an even
number of events we take the average of the two middles. The median is better for describing
the typical value. It is often used for income and home prices.
Example
Suppose you randomly selected 10 house prices. You are interested in the typical house
price. In lakhs the prices are 2.7, 2.9, 3.1, 3.4, 3.7, 4.1, 4.3, 4.7, 4.7, 40.8
If we computed the mean, we would say that the average house price is 744,000. Although
this number is true, it does not reflect the price for available housing in Central Mangaluru.
A closer look at the data shows that the house valued at
40.8 x 100,000 = 40.8 Lakhs skews the data. Instead, we use the median. Since there is an
even number of outcomes, we take the average of the middle two (3.7 + 4.1)/2 = 3.9.
Therefore, the median house price is 390,000. This better reflects what a house shopper
should expect to pay for a house.
The empirical relation between mean, mode, and median is: Mean - Mode = 3 (Mean - Median).
For example, suppose a pharmaceutical engineer develops a new drug that regulates blood sugar,
and she finds that the average sugar content after taking the medication is at the
optimal level. This does not mean that the drug is effective. There is a possibility that half of
the patients have dangerously low sugar content while the other half has dangerously high
content. Instead of the drug being an effective regulator, it is a deadly poison. What the
pharmacist needs is a measure of how far the data is spread apart. This is what the variance
and standard deviation do.
Measures of Dispersion:
Dispersion gives information about how spread out the values are in the data set.
Common measures of dispersion are range, standard deviation
Range:
Definition: The range in a data set measures the difference between the smallest entry
value and the largest entry value.
Formula: Range = (largest entry value - smallest entry value)
Standard Deviation:
Definition: Standard deviation measures the variation or dispersion that exists from the
mean.
A low standard deviation indicates that the data points tend to be very close to the
mean, whereas high standard deviation indicates that the data points are spread over a
large range of values.
Illustration:
The owner of a restaurant is interested in how much people spend at the restaurant. He
examines 10 randomly selected receipts for parties of four and writes down the following
data.
44, 50, 38, 96, 42, 47, 40, 39, 46, 50
Mean = 49.2
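The illustration stops at the mean. A minimal Python sketch (standard library only) reproduces the quoted mean of 49.2 and adds the median and sample standard deviation, which the text leaves to the reader:

```python
# Verify the restaurant example: mean, median, and sample standard deviation
# of the ten receipt totals listed above.
import statistics

receipts = [44, 50, 38, 96, 42, 47, 40, 39, 46, 50]

mean = statistics.mean(receipts)      # 49.2, as quoted
median = statistics.median(receipts)  # 45.0 -- the outlier (96) pulls the mean upward
stdev = statistics.stdev(receipts)    # sample standard deviation (n - 1 in the denominator)

print(f"mean = {mean}, median = {median}, sample std dev = {stdev:.2f}")
```

The gap between the mean and the median is another sign of the outlier discussed earlier.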
Illustration 2:
2). Table shows the number of goals scored in a game of football shootout (each person
gets 5 kicks at the goal) by students in a class.
Goals scored:  0  1  2  3  4  5
Frequency:     1  4  9  8  5  3
a). Calculate the mean and median.
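A short Python sketch of the requested calculation, working directly from the frequency table above:

```python
# Mean and median of goals scored, from the frequency table (goals 0-5,
# frequencies 1, 4, 9, 8, 5, 3).
goals = [0, 1, 2, 3, 4, 5]
freq  = [1, 4, 9, 8, 5, 3]

n = sum(freq)                                    # 30 students in total
mean = sum(g * f for g, f in zip(goals, freq)) / n

# Expand the table into the raw list of 30 observations to find the median.
data = sorted(g for g, f in zip(goals, freq) for _ in range(f))
median = (data[n // 2 - 1] + data[n // 2]) / 2   # even n: average the two middle values

print(f"mean = {mean:.2f}, median = {median}")   # mean = 2.70, median = 3.0
```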
Illustration 3:
During rule 2, we examine the previous result and take action to counteract the
motion of the marble. We correct for the error of the previous drop. If the marble
rolled 2 inches northeast, we position the funnel 2 inches to the southwest of where it
last was.
A common example is worker adjustments to machinery. A worker may be working
to make a unit of uniform weight. If the last item was 2 pounds underweight, increase
the setting for the amount of material in the next item by 2 pounds.
Other examples include taking action to change policies and production levels based
upon last month's budget variances, profit margins, and output.
A possible flaw in rule 2 was that it adjusted the funnel from its last position, rather
than relative to the target. If the marble rolled 2 inches northeast last time, we should
set the funnel 2 inches southwest of the target. Then when the marble again rolls 2
inches northeast, it will stop on the target. The funnel is set at an equal and opposite
direction from the target to compensate for the last error.
We see rule 3 at work in systems where two parties react to each other's actions.
Their goal is to maintain parity. If one country increases its nuclear arsenal, the rival
country increases their arsenal to maintain the perceived balance.
A common example provided in economics courses is agriculture. A drought occurs
one year causing a drop in crop output. Prices rise, causing farmers to plant more
crop next year. In the next year, there are surpluses, causing the price to drop.
Farmers plant less next year. The cycle continues
In an attempt to reduce the variability of the marble drops, we decide to allow the
marble to fall where it wants to. We position the funnel over the last location of the
marble, as that appears to be the tendency of where the marble tends to stop.
A common example of Rule 4 is when we want to cut lumber to a uniform length.
We use the piece we just cut in order to measure the location of the next cut.
Other examples of Rule 4 include:
Brainstorming (without outside help)
Adjusting starting time of the next meeting based upon actual starting time of the last
meeting
Benchmarking, in order to find examples to follow A message is passed from one
person to the next, who repeats it to another person, and so forth.
The junior worker trains the next new worker, who then trains the next, and so forth.
Tampering
Rules 2, 3, and 4 are all examples of process tampering. We take action ("don't just
stand there - do something!") as a result of the most recent result.
Rule 2 leads to a uniform circular pattern, whose size is 40% bigger than the Rule 1
circle. This is because the error in distance from the funnel is independent from one
marble drop to the next. In positioning the funnel relative to the previous marble
drop, we add the error from the first drop (by repositioning the funnel) to the second
drop (the error in the marble).
The standard deviation of adding n independent random variables is the square root of
n times the standard deviation of the individual. So the combined standard deviation
is 1.4 times the original standard deviation. Note, this statistical principle is a
standard question that appears on every Certified Quality Engineer exam in some
form or another.
The problems of Rule 2 are corrected with dead bands in automated feedback
mechanisms and better calibration programs. We wait for a certain error to build up
before taking action. But how is the dead band determined? A control chart provides
the answer. Plot the results on a control chart, and recalibrate (or give a feedback
signal) when a statistically significant change is detected. Program dead bands
approximate the control chart action.
Rules 3 and 4 tend to blow up. In rule 3, results swing back and forth with greater
and greater oscillations from the target. In rule 4, the funnel follows a drunken walk
off the edge of the table. In both cases, errors accumulate from one correction to
the next, and the marble (or system) heads off to infinity. Rules 3 and 4 represent
unstable systems, with over-corrections tending to occur.
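An illustrative one-dimensional simulation of the funnel rules follows. The drop error is assumed to be normally distributed with standard deviation 1, the target is 0, and the drop count is chosen arbitrarily; these parameters are assumptions for the sketch only, not taken from the text.

```python
# Simulate the four funnel rules in one dimension and compare the spread of
# the resting positions under each rule.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
errors = rng.normal(0.0, 1.0, n)     # marble error relative to the funnel position

def simulate(rule):
    funnel, spots = 0.0, []
    for e in errors:
        spot = funnel + e            # where the marble comes to rest
        spots.append(spot)
        if rule == 2:                # move the funnel opposite to the last error
            funnel -= e
        elif rule == 3:              # set the funnel opposite the last spot, relative to the target
            funnel = -spot
        elif rule == 4:              # set the funnel on the last resting spot
            funnel = spot
        # rule 1: leave the funnel alone
    return np.std(spots)

for rule in (1, 2, 3, 4):
    print(f"Rule {rule}: std dev of resting spots = {simulate(rule):.2f}")
```

Rule 2 gives roughly 1.4 times the Rule 1 spread, consistent with the square-root-of-two factor argued above, while Rules 3 and 4 drift off without bound.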
Final words
Schemes to control the location of the funnel should be control chart based. In
addition, we may have to think outside of the box to fix this system.
If we lowered the height of the funnel, we would fundamentally reduce the variation
in the process.
If we added more layers of cloth or paper to cushion the marble's landing, then the
marble would roll less. The impact of these changes would be detected by the control
chart, and would prove whether or not an improvement did occur.
If the population from which samples are taken is NOT normal, the distribution of
SAMPLE AVERAGES will tend toward normality provided that sample size, n, is at
least 4.
The tendency gets better as n increases.
Standardized normal for distribution of averages
[Figures 1 and 2 (not reproduced): illustrations of the distribution of sample averages tending toward normality.]
Figure 3 shows a normal distribution with a mean of 75 and a standard deviation of
10. The shaded area contains 95% of the area and extends from 55.4 to 94.6.
For all normal distributions, 95% of the area is within 1.96 standard deviations of the
mean.
For quick approximations, it is sometimes useful to round off and use 2 rather than
1.96 as the number of standard deviations you need to extend from the mean so as to
include 95% of the area
Figure 3
Z Score
Z-scores are expressed in terms of standard deviations from their means. As a result,
these z-scores have a distribution with a mean of 0 and a standard deviation of 1. The
formula for calculating the standard score is:

$z = \frac{X - \mu}{\sigma}$

where X is the raw score, $\mu$ the mean, and $\sigma$ the standard deviation.
Clearly, we can see that Sharath did better than a large proportion of students, with
74.86% of the class scoring lower than he did.
However, the key finding is that Sharath's score was not one of the best marks; it
was not even in the top 10% of scores in the class. How? Let us see.
A better way of phrasing second question would be to ask: What mark would a
student have to achieve to be in the top 10% of the class and qualify for the advanced
English Literature class?
If we refer to our frequency distribution below, we are interested in the area in the right
tail, beyond the mean score of 60, that reflects the top 10% of marks. As a decimal, the top
10% of marks are those above the point where the cumulative proportion reaches 0.90 (i.e.,
1 - 0.10 = 0.90). In this case, we need to work in reverse: find the z-score that leaves an area
of 0.90 below it.
This forms the second part of the z-score. Putting the two values together, the z-score for an
area of 0.8997 is 1.28 (i.e., 1.2 from the table row plus 0.08 from the column). Using the more
precise value of 1.282 for the top 10%:

Score (X) = ?, Mean = 60, Standard Deviation (s) = 15, z-score (Z) = 1.282

Rearranging the z-score formula gives X = 60 + 1.282 x 15, or approximately 79.2, so a student
would need a mark of roughly 79 to be in the top 10% of the class.
Interpretation of Z score
A z-score less than 0 represents an element less than the mean.
A z-score greater than 0 represents an element greater than the mean.
A z-score equal to 0 represents an element equal to the mean.
A z-score equal to 1 represents an element that is 1 standard deviation greater than the
mean; a z-score equal to 2, 2 standard deviations greater than the mean; etc.
A z-score equal to -1 represents an element that is 1 standard deviation less than the
mean; a z-score equal to -2, 2 standard deviations less than the mean; etc.
If the number of elements in the set is large, about 68% of the elements have a z-score
between -1 and 1; about 95% have a z-score between -2 and 2; and about 99% have a
z-score between -3 and 3.
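As a hedged check of the example: the class mean (60) and standard deviation (15) come from the table above, but Sharath's raw mark is not stated in this excerpt, so a mark of 70 is assumed here because, rounded to a two-decimal z-score as with a printed z-table, it reproduces the quoted 74.86% figure.

```python
# The two z-score calculations from the example, using SciPy's normal distribution.
from scipy.stats import norm

mean, sd = 60, 15

# Forward: raw score -> z-score (rounded to 2 decimals, as with a z-table lookup)
z = round((70 - mean) / sd, 2)       # 0.67 (the raw mark of 70 is an assumption)
print(norm.cdf(z))                   # ~0.7486, i.e. 74.86% of the class scored lower

# Reverse: top-10% cutoff -> z-score -> required raw mark
z_cut = norm.ppf(0.90)               # ~1.2816 (the 1.282 in the table above)
print(mean + z_cut * sd)             # ~79.2 marks needed for the top 10%
```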
Check sheets
Check sheets are a simple way of gathering data so that decisions can be based on facts,
rather than anecdotal evidence. Figure 4 shows a checklist used to determine the causes of
defects in a hypothetical assembly process. It indicates that "not-to-print" is the biggest cause
of defects, and hence, a good subject for improvement. Checklist items should be selected to
be mutually exclusive and to cover all reasonable categories. If too many checks are made in
the "other" category, a new set of categories is needed.
They could also be used to relate the number of defects to the day of the week to see if there
is any significant difference in the number of defects between workdays. Other possible
column or row entries could be production line, shift, product type, machine used, operator,
etc., depending on what factors are considered useful to examine. So long as each factor can
be considered mutually exclusive, the chart can provide useful data. An Ishikawa Diagram
may be helpful in selecting factors to consider. The data gathered in a checklist can be used
as input to a Pareto chart for ease of analysis.
Pareto Charts
Vilfredo Pareto was an economist who noted that a few people controlled most of a nation's
wealth. "Pareto's Law" has also been applied to many other areas, including defects, where a
few causes are responsible for most of the problems. Separating the "vital few" from the
"trivial many" can be done using a diagram known as a Pareto chart. Figure below shows the
data from the checklist shown in above Figure organized into a Pareto chart.
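A minimal sketch of the Pareto ordering itself follows. The defect categories and counts below are hypothetical stand-ins, since the referenced checklist figure is not reproduced here; only "not-to-print" is named in the text.

```python
# Turn check-sheet tallies into a Pareto ranking with cumulative percentages.
defects = {
    "not-to-print": 52,        # named in the text as the biggest cause
    "solder bridge": 21,       # remaining categories and counts are invented
    "missing component": 13,
    "wrong component": 8,
    "other": 6,
}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:20s} {count:4d}  {100 * cumulative / total:5.1f}% cumulative")
# The first one or two rows (the "vital few") typically account for most of the defects.
```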
Stratification is simply the creation of a set of Pareto charts for the same data, using different
possible causative factors. For example, Figure below plots defects against three possible
sets of potential causes. The figure shows that there is no significant difference in defects
between production lines or shifts, but product type three has significantly more defects than
do the others. Finding the reason for this difference in number of defects could be
worthwhile.
Ishikawa diagrams are named after their inventor, Kaoru Ishikawa. They are also called
fishbone charts, after their appearance, or cause and effect diagrams after their function. Their
function is to identify the factors that are causing an undesired effect (e.g., defects) for
improvement action, or to identify the factors needed to bring about a desired result (e.g., a
winning proposal). The factors are identified by people familiar with the process involved. As
a starting point, major factors could be designated using the "four M's": Method, Manpower,
Material, and Machinery; or the "four P's": Policies, Procedures, People, and Plant. Factors
can be subdivided, if useful, and the identification of significant factors is often a prelude to
the statistical design of experiments.
Above figure is a partially completed Ishikawa diagram attempting to identify potential
causes of defects in a wave solder process.
Graphs
Graphs come in many types, and each type is usually better suited to a specific purpose. Depending
on the situation to analyze or the information to share, the choice of the most suitable type of graph
- and, beyond the type, its scale and other parameters - will highlight or hide certain
aspects.
The first type of graph, and perhaps the most common, is the line chart: lines joining plotted
points, where each point is the graphical depiction of a pair of coordinates, and those coordinates
are the translation of specific parameters to check, e.g. kilometres or miles per hour (speed),
temperature over time, units per hour per day (as above), etc.
A radar chart is a graphical method of displaying multivariate data in the form of a two-
dimensional chart of three or more quantitative variables represented on axes starting from the
same point. The relative position and angle of the axes is typically uninformative.
For example, the graph above depicts the performance of the company with respect to its usage
of budget; the total area covered by either the blue shape or the red shape gives a clear picture.
Similarly, we can plot the performance of two or more companies against the parameters
mentioned above and identify the best company based on the area covered.
Control Charts
Control charts are the most complicated of the seven basic tools of TQM, but are based on
simple principles. The charts are made by plotting in sequence the measured values of
samples taken from a process. For example, the mean length of a sample of rods from a
production line, the number of defects in a sample of a product, the miles per gallon of
automobiles tested sequentially in a model year, etc. These measurements are expected to
vary randomly about some mean with a known variance. From the mean and variance,
control limits can be established. Control limits are values that sample measurements are not
expected to exceed unless some special cause changes the process. A sample measurement
outside the control limits therefore indicates that the process is no longer stable, and is
usually reason for corrective action.
Another cause for corrective action is non-random behavior of the measurements within the
control limits. Control limits are established by statistical methods depending on whether the
measurements are of a parameter, attribute or rate.
Histograms
Histograms are another form of bar chart in which measurements are grouped into bins; in
this case each bin representing a range of values of some parameter. For example, in Figure
below, X could represent the length of a rod in inches. The figure shows that most rods
measure between 0.9 and 1.1 inches. If the target value is 1.0 inches, this could be good
news. However, the chart also shows a wide variance, with the measured values falling
between 0.5 and 1.5 inches. Such a wide range is generally a most unsatisfactory situation.
Besides the central tendency and spread of the data, the shape of the histogram can also be of
interest.
Scatter diagrams
Scatter diagrams are a graphical, rather than statistical, means of examining whether or not
two parameters are related to each other. It is simply the plotting of each point of data on a
chart with one parameter as the x-axis and the other as the y-axis. If the points form a narrow
"cloud" the parameters are closely related and one may be used as a predictor of the other. A
wide "cloud" indicates poor correlation. Figure below shows a plot of defect rate vs.
temperature with a strong positive correlation.
It should be noted that the slope of a line drawn through the center of the cloud is an artefact
of the scales used and hence not a measure of the strength of the correlation. Unfortunately,
the scales used also affect the width of the cloud, which is the indicator of correlation. When
there is a question on the strength of the correlation between the two parameters, a correlation
coefficient can be calculated. This will give a rigorous statistical measure of the correlation
ranging from -1.0 (perfect negative correlation), through zero (no correlation) to +1.0 (perfect
correlation).
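A small sketch of that calculation; the temperature and defect-rate pairs are invented solely to illustrate a strong positive correlation of the kind described above.

```python
# Correlation coefficient for a scatter diagram (hypothetical data).
import numpy as np

temperature = np.array([180, 185, 190, 195, 200, 205, 210, 215])
defect_rate = np.array([1.1, 1.3, 1.6, 1.7, 2.1, 2.4, 2.6, 3.0])

r = np.corrcoef(temperature, defect_rate)[0, 1]
print(f"correlation coefficient r = {r:.2f}")   # close to +1.0: strong positive correlation
```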
The control limits as pictured in the graph might be 0.001 probability limits. If so, and if
chance causes alone were present, the probability of a point falling above the upper limit
would be one out of a thousand, and similarly, a point falling below the lower limit would be
one out of a thousand. We would be searching for an assignable cause if a point would fall
outside these limits. Where we put these limits will determine the risk of undertaking such a
search when in reality there is no assignable cause for variation.
Since two out of a thousand is a very small risk, the 0.001 limits may be said to give practical
assurances that, if a point falls outside these limits, the variation was caused be an assignable
cause. It must be noted that two out of one thousand is a purely arbitrary number. There is no
reason why it could not have been set to one out a hundred or even larger. The decision
would depend on the amount of risk the management of the quality control program is willing
to take. In general (in the world of quality control) it is customary to use limits that
approximate the 0.002 standard.
Letting X denote the value of a process characteristic, if the system of chance causes
generates a variation in X that follows the normal distribution, the 0.001 probability limits
will be very close to the 3-sigma limits. From normal tables we glean that the probability beyond
3 sigma in one direction is 0.00135, or in both directions 0.0027. For normal distributions,
therefore, the 3-sigma limits are the practical equivalent of 0.001 probability limits.
Control charts fall into two categories: Variable and Attribute Control Charts.
Variable data are data that can be measured on a continuous scale such as a
thermometer, a weighing scale, or a tape rule.
Attribute data are data that are counted, for example, as good or defective, as
possessing or not possessing a particular characteristic.
Variables answer the question how much? and are measured in quantitative units,
for example weight, voltage or time.
Attributes answer the question how many? and are measured as a count, for
example the number of defects in a batch of products.
The type of control chart you use will depend on the type of data you are working with.
It is always preferable to use variable data.
Variable data will provide better information about the process than attribute data.
Additionally, variable data require fewer samples to draw meaningful conclusions.
As the data points for each subgroup are plotted, the points are connected to the previous
point and the charts are interpreted to determine if one of the out-of-control patterns has
occurred.
Typically, only one of the charts will go out-of-control at any one time.
Remember to add a comment on a chart to indicate the action taken to correct an out-
of-control situation.
u-Chart
Similar to a c-chart, the u-chart is used to track the total count of defects per unit (u) that
occur during the sampling period and can track a sample having more than one defect.
However, unlike a c-chart, a u-chart is used when the number of samples of each sampling
period may vary significantly.
Example of u-Chart
np-Chart
Use an np-chart when identifying the total count of defective units (the unit may have one or
more defects) with a constant sampling size.
Example of np-Chart
p-Chart
Used when each unit can be considered pass or fail, no matter the number of defects, a p-
chart shows the number of tracked failures (np) divided by the number of total units (n).
Example of p-Chart
Notice that no discrete control charts have corresponding range charts as with the variable
charts. The standard deviation is estimated from the parameter itself (p, u or c); therefore, a
range is not required.
A number of points may be taken into consideration when identifying the type of control
chart to use, such as:
Variables control charts (those that measure variation on a continuous scale) are more
sensitive to change than attribute control charts (those that measure variation on a
discrete scale).
Variables charts are useful for processes such as measuring tool wear.
Use an individuals chart when few measurements are available (e.g., when they are
infrequent or are particularly costly). These charts should be used when the natural
subgroup is not yet known.
A measure of defective units is found with u and c-charts.
In a u-chart, the defects within the unit must be independent of one another, such as
with component failures on a printed circuit board or the number of defects on a
billing statement.
Use a u-chart for continuous items, such as fabric (e.g., defects per square meter of
cloth).
A c-chart is a useful alternative to a u-chart when there are a lot of possible defects on
a unit, but there is only a small chance of any one defect occurring (e.g., flaws in a
roll of material).
When charting proportions, p and np-charts are useful (e.g., compliance rates or
process yields).
There are many things that can dictate the size of your sample. Let's start by figuring out the
ideal sample size, the one that you would have if you lived in a perfect world. Then, we'll
look at how real-world issues can play a role in determining what that sample size actually
ends up being.
In general, a larger sample size is better. Why is this? Well, all research is interested in making
inferences about the population at large. The larger the sample size, the closer you are to
having everyone in the population in your study.
For example, what would happen if you decided to do your coffee study on just three people?
Maybe one of them is taking a drug that interacts with caffeine. As a result, when this person
drinks coffee, they don't really get any more energetic or productive.
In your study, one-third of the sample has no reaction to coffee due to that drug. But in the
actual population, maybe only two or three percent of people take this drug. Your study
makes it look like a lot of people don't react to coffee because of the drug. Your results are
not accurate.
Inaccuracy due to a difference in the sample and the population is called error in research. A
larger sample size reduces error. If, for example, you increased your sample size from three
people to three hundred people, it is less likely that one-third of your sample will be taking
the drug that makes them less sensitive to caffeine.
Sampling frequency indicates how often the samples need to be considered for studies. In
designing a control chart, both the sample size to be selected and the frequency of selection
must be specified. Larger samples make it easier to detect small shifts in the process. Current
practice tends to favor smaller, more frequent samples.
The run length (the number of plotted points until an out-of-control signal is given) depends on
whether we are finding the in-control run length or the out-of-control run length. The in-control
run length measures the number of plotted points from the beginning of the monitoring period
until an out-of-control signal, given that there have been no changes in the process. We want the
average in-control run length to be high.
The out-of-control run length measures the number of plotted points from the time of
a process change until an out-of-control signal is given. Its value depends upon the
size of the shift. We want the average out-of-control run length to be small.
Assuming that the statistic being plotted is independent over time (i.e., one plotted
point is independent of other plotted points), the run length follows a geometric
distribution.
The average run length (ARL) is the average number of points plotted on the chart
until an out-of-control condition is signaled. It is the expected value of the run length
distribution. It is related to the OC curve as follows:
$ARL = \frac{1}{1 - P_a}$

where $1 - P_a$ is the probability that a point plots outside the control limits: for an in-control
process this equals $\alpha$, the probability of a Type I error, and for an out-of-control process it
equals $1 - \beta$, where $\beta$ is the probability of a Type II error.
Consider a problem with control limits set at 3 standard deviations ($3\sigma$) from the mean.
The probability that a point plots beyond the control limits is, again, 0.0027 (i.e., p =
0.0027). Then the average run length is

$ARL = \frac{1}{0.0027} \approx 370$
What does the ARL tell us?
The average run length gives us the length of time (or number of samples) that should
plot in control before a point plots outside the control limits.
For our problem, even if the process remains in control, an out-of-control signal could
be generated every 370 samples, on average.
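A short sketch of the arithmetic behind the 370 figure, with a simulation as a sanity check; the simulation parameters (seed, number of trials) are illustrative only.

```python
# Average run length for 3-sigma limits: alpha, ARL = 1/alpha, plus a quick simulation.
import numpy as np
from scipy.stats import norm

alpha = 2 * (1 - norm.cdf(3))        # probability a point plots outside 3-sigma limits, ~0.0027
arl = 1 / alpha                      # ~370 samples between false alarms, on average
print(f"alpha = {alpha:.4f}, in-control ARL = {arl:.0f}")

# Simulation: count points until one exceeds the limits, for an in-control process.
rng = np.random.default_rng(1)
runs = []
for _ in range(2000):
    count = 1
    while abs(rng.normal()) <= 3:    # keep sampling while points stay inside +/- 3 sigma
        count += 1
    runs.append(count)
print(f"simulated average run length ~ {np.mean(runs):.0f}")
```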
Rational Subgroups
The key to successful control charts is the formation of rational subgroups. Control charts
rely upon rational subgroups to estimate the short-term variation in the process. This short-
term variation is then used to predict the longer-term variation defined by the control limits,
which differentiate between common and special causes of variation.
A rational subgroup is simply a sample in which all of the items are produced under
conditions in which only random effects are responsible for the observed variation
Subgroups or samples should be selected so that if assignable causes are present, the
chance for differences between subgroups will be maximized, while the chance for
differences due to these assignable causes within a subgroup will be minimized.
1. The observations within a subgroup are from a single, stable process. If subgroups
contain the elements of multiple process streams, or if other special causes occur
frequently within subgroups, then the within subgroup variation will be large relative
to the variation between subgroup averages. This large within subgroup variation
forces the control limits to be too far apart, resulting in a lack of sensitivity to process
shifts. Western Electric Run Test 7 (15 successive points within one sigma of center
line) is helpful in detecting this condition.
2. The subgroups are formed from observations taken in a time-ordered sequence. In
other words, subgroups cannot be randomly formed from a set of data (or a box of
parts); instead, the data comprising a subgroup must be a "snapshot" of the process
over a small window of time, and the order of the subgroups would show how those
snapshots vary in time (like a "movie"). The size of the "small window of time" is
determined on an individual process basis to minimize the chance of a special cause
occurring in the subgroup.
3. The observations within the subgroups are independent, implying that no observation
influences, or results from, another. If observations are dependent on one another, the
process has autocorrelation (also known as serial correlation). In many cases, the
autocorrelation causes the within subgroup variation to be unnaturally small and a
poor predictor of the between subgroup variation. The small within subgroup
variation forces the control limits to be too narrow, resulting in frequent out-of-control
conditions and leading to tampering.
A control chart may indicate an out-of-control condition either when one or more points fall
beyond the upper and lower control limits or when then plotted point exhibit some
nonrandom pattern.
Run: A run is a sequence of observations of the same type. When four or more points in a row
steadily increase in magnitude or decrease in magnitude, this arrangement of points is also called
a run (a run up or a run down).
Several criteria may be applied simultaneously to a control chart to determine whether the
process is out of control. The basic criterion is one or more points outside of the control limits.
Supplementary criteria are sometimes used to increase the sensitivity of the control charts
to a small process shift so that one may respond more quickly to the assignable cause. Some
sensitizing rules for Shewhart control charts are as follows:
1) One or more points plot outside the control limits.
2) Two out of three consecutive points plot outside the 2-sigma warning limits but still
inside the control limits.
3) Four of five consecutive points beyond the 1-sigma limits.
4) A run of eight consecutive points on one side of the center.
5) Six points in a row steadily increasing or decreasing.
6) 15 points in a row in zone C (both above and below the center line).
7) 14 points in a row alternating up and down.
8) Eight points in a row on both sides of the center line with none in zone C.
9) An unusual or nonrandom pattern in the data.
10) One or more points near a warning or control limit.
Among the 10 rules, the first four are called the Western Electric Rules (1956).
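A hedged sketch of how the first four rules might be checked programmatically, with the plotted points expressed in sigma units from the centerline; this is an illustrative implementation written for this text, not a standard library routine.

```python
# Check the first four sensitizing rules (the Western Electric rules) on a
# sequence of points given in sigma units from the centerline.
def western_electric(z):
    """Return the rule numbers violated anywhere in the sequence z."""
    violations = set()
    for i, v in enumerate(z):
        if abs(v) > 3:                                        # rule 1: beyond the control limits
            violations.add(1)
        last3 = z[max(0, i - 2): i + 1]
        if len(last3) == 3 and (sum(1 for w in last3 if w > 2) >= 2
                                or sum(1 for w in last3 if w < -2) >= 2):
            violations.add(2)                                 # rule 2: 2 of 3 beyond 2 sigma, same side
        last5 = z[max(0, i - 4): i + 1]
        if len(last5) == 5 and (sum(1 for w in last5 if w > 1) >= 4
                                or sum(1 for w in last5 if w < -1) >= 4):
            violations.add(3)                                 # rule 3: 4 of 5 beyond 1 sigma, same side
        last8 = z[max(0, i - 7): i + 1]
        if len(last8) == 8 and (all(w > 0 for w in last8) or all(w < 0 for w in last8)):
            violations.add(4)                                 # rule 4: 8 in a row on one side of center
    return sorted(violations)

print(western_electric([0.2, 0.5, 0.4, 0.9, 1.1, 0.7, 0.3, 0.6, 3.2]))  # [1, 4]
```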
The control chart is most effective when integrated into a comprehensive SPC program. The
seven major SPC problem-solving tools should be used routinely to identify improvement
opportunities and to assist in reducing variability and eliminating waste.
All natural processes are affected by intrinsic variation. In nature, no matter how hard we try,
there can never be two identical actions that generate exactly the same result. This simple
statement contains a deeper truth that is connected with change and entropy (a measure of
disorder in the environment). Change is a constant in nature that is not only necessary but
literally vital. There would be no life without change.
How does this intrinsic characteristic of nature affect the activities of an organization?
In order to have information that can be used to make the right decisions, we need a technical
tool and the right mindset. We find both of these within Statistical Process Control (SPC),
introduced by W. Shewhart in the first half of the 20th century. This period saw the birth of
the Quality movement, first as a philosophy for production management, and later as a
general approach for organizations. It would be a mistake to consider SPC as a technicality.
As the founding father of Quality, Dr. Deming used to say, SPC is not simply a technique but
a way of thinking.
Variation in the production process leads to quality defects and lack of product consistency.
Categories of variation
Within-piece variation
One portion of surface is rougher than another portion.
Piece-to-piece variation
Variation among pieces produced at the same time.
Time-to-time variation
Service given early would be different from that given later in the day.
Sources of Variation
Equipment
Tool wear, machine vibration
Material
Raw material quality
Environment
Temperature, pressure, humidity
Operator
Operator's performance - physical and emotional state
Variation in a process occurs due to Common or chance causes and Assignable causes
Chance causes - common cause
inherent to the process or random and not controllable
if only common cause present, the process is considered stable or in control
Assignable causes - special cause
variation due to outside influences
if present, the process is out of control
If you look at bottles of a soft drink in a grocery store, you will notice that no two bottles are
filled to exactly the same level. Some are filled slightly higher and some slightly lower.
These types of differences are completely normal. No two products are exactly alike because
of slight differences in materials, workers, machines, tools, and other factors. These are called
common, or random, causes of variation. Common causes of variation are based on random
causes that we cannot identify. These types of variation are unavoidable and are due to slight
differences in processing.
The second type of variation that can be observed involves variations where the causes can be
precisely identified and eliminated. These are called assignable causes of variation. Examples
of this type of variation are poor quality in raw materials, an employee who needs more
training, or a machine in need of repair. In each of these examples the problem can be
identified and corrected. Also, if the problem is allowed to persist, it will continue to create a
problem in the quality of the product. In the example of the soft drink bottling operation,
bottles filled with 15.6 ounces of liquid would signal a problem. The machine may need to be
readjusted. This would be an assignable cause of variation. We can assign the variation to a
particular cause (machine needs to be readjusted) and we can correct the problem.
Generally, collect 20-25 subgroups (100 total samples) before calculating the control
limits.
Each time a subgroup of sample size n is taken, an average is calculated for the
subgroup and plotted on the control chart.
V. Determine center line
The centerline should be the population mean, $\mu$.
Since it is unknown, we use X double bar ($\bar{\bar{X}}$), the grand average of the subgroup
averages.
VI. Determine control limits
The normal curve displays the distribution of the sample averages.
A control chart is a time-dependent pictorial representation of a normal curve.
Processes that are considered under control will have 99.73% of their graphed
averages fall within $\pm 3\sigma$ of the centerline (a total spread of $6\sigma$).
Our objectives for this section are to learn how to use control charts to monitor
continuous data. We want to learn the assumptions behind the charts, their
application, and their interpretation.
Since statistical control for continuous data depends on both the mean and the
variability, variables control charts are constructed to monitor each. The most
commonly used chart to monitor the mean is called the X-bar chart. There are two
commonly used charts used to monitor the variability: the R chart and the s chart.
3. Compute $\bar{X}$ and R (or s) for each sample, and plot them on their respective control
charts. Use the following relationships:

$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}$,  $R = X_{max} - X_{min}$,  $s = \sqrt{\frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n-1}}$

4. Compute the grand averages of the subgroup statistics over the k subgroups; these become the
chart centerlines, from which the trial control limits are calculated:

$\bar{\bar{X}} = \frac{\sum_{j=1}^{k} \bar{X}_j}{k}$,  $\bar{R} = \frac{\sum_{j=1}^{k} R_j}{k}$,  $\bar{s} = \frac{\sum_{j=1}^{k} s_j}{k}$
5. If any points fall outside of the control limits, conclude that the process is out of
control, and begin a search for an assignable or special cause. When the special cause
is identified, remove that point and return to step 4 to re-evaluate the remaining
points.
6. If all the points are within limits, conclude that the process is in control, and use
the calculated limits for future monitoring of the process.
Because the limits of the X-bar chart are based on the variability of the process, we will
first discuss the variability charts. I suggest that you first determine if the R chart (or s chart)
shows a lack of control. If so, you cannot draw conclusions from the X-bar chart.
The R chart
The R chart is used to monitor process variability when sample sizes are small (n<10),
or to simplify the calculations made by process operators.
This chart is called the R chart because the statistic being plotted is the sample range.
Using the R chart, the estimate of the process standard deviation is $\hat{\sigma} = \bar{R}/d_2$.
The s chart
The s chart is used to monitor process variability when sample sizes are large (n >= 10),
or when a computer is available to automate the calculations.
This chart is called the s chart because the statistic being plotted is the
sample standard deviation.
Using the s chart, the estimate of the process standard deviation is $\hat{\sigma} = \bar{s}/c_4$.
The X-bar Chart:
This chart is called the X-bar chart because the statistic being plotted is the sample mean.
The reason for taking a sample is because we are not always sure of the process
distribution. By using the sample mean we can "invoke" the central limit theorem to
assume normality.
Control limit formulas (the constants A, A2, A3, d2, D1-D4, c4 and B3-B6 are tabulated for each subgroup size n):

X-bar chart, using R, $\sigma$ not known: centerline $\bar{\bar{X}}$, limits $\bar{\bar{X}} \pm A_2 \bar{R}$
X-bar chart, $\sigma$ known: centerline $\bar{\bar{X}}$, limits $\bar{\bar{X}} \pm A \sigma$
X-bar chart, using s, $\sigma$ not known: centerline $\bar{\bar{X}}$, limits $\bar{\bar{X}} \pm A_3 \bar{s}$
R chart, $\sigma$ known: centerline $d_2 \sigma$, LCL $= D_1 \sigma$, UCL $= D_2 \sigma$
R chart, $\sigma$ not known: centerline $\bar{R}$, LCL $= D_3 \bar{R}$, UCL $= D_4 \bar{R}$
s chart, $\sigma$ known: centerline $c_4 \sigma$, LCL $= B_5 \sigma$, UCL $= B_6 \sigma$
s chart, $\sigma$ not known: centerline $\bar{s}$, LCL $= B_3 \bar{s}$, UCL $= B_4 \bar{s}$
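A minimal sketch of applying the R-based formulas above, using the published constants for subgroups of size n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114). The four subgroups of measurements are invented for illustration; in practice one would collect 20-25 subgroups, as noted earlier.

```python
# X-bar and R chart centerlines and control limits from subgroup data (n = 5).
subgroups = [
    [5.01, 4.99, 5.00, 5.02, 4.98],
    [4.97, 5.00, 5.03, 5.01, 4.99],
    [5.02, 5.00, 4.98, 5.01, 5.00],
    [4.99, 4.98, 5.00, 5.02, 5.01],
]
A2, D3, D4 = 0.577, 0.0, 2.114        # Shewhart constants for subgroup size 5

xbars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]

xbarbar = sum(xbars) / len(xbars)      # grand average: centerline of the X-bar chart
rbar = sum(ranges) / len(ranges)       # average range: centerline of the R chart

print(f"X-bar chart: CL={xbarbar:.3f}, UCL={xbarbar + A2 * rbar:.3f}, LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:     CL={rbar:.3f},  UCL={D4 * rbar:.3f},  LCL={D3 * rbar:.3f}")
```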
Illustration
[Figure: X-bar chart for the example data, with centerline at 5.00 and lower control limit near 4.96, and the corresponding R chart, with upper control limit near 0.25 and lower control limit at 0, each plotted over subgroups 1 to 10.]
INTRODUCTION:
Process capability is the ability of the process to meet the design specifications for a service
or product. Nominal value is a target for design specifications. Tolerance is an allowance
above or below the nominal value.
Traditional capability rates are calculated when a product or service feature is measured
through a quantitative continuous variable, assuming the data follows a normal probability
distribution. A normal distribution features the measurement of a mean and a standard
deviation, making it possible to estimate the probability of an incident within any data set.
The most interesting values relate to the probability of data occurring outside of customer
specifications. These are data appearing below the lower specification limit (LSL) or above
the upper specification limit (USL). A common mistake is to use capability studies to
deal with categorical data, turning the data into rates or percentiles. In such cases,
determining specification limits becomes complex. For example, a billing process may
generate correct or incorrect invoices. These represent categorical variables, which by
definition carry an ideal USL of 100 percent error free processing, rendering the traditional
statistical measures (Cp, Cpk, Pp and Ppk) inapplicable to categorical variables.
When working with continuous variables, the traditional statistical measures are quite useful,
especially in manufacturing. The difference between capability rates (Cp and Cpk) and
performance rates (Pp and Ppk) is the method of estimating the statistical population standard
deviation. The difference between the centralized rates (Cp and Pp) and unilateral rates (Cpk
and Ppk) is the impact of the mean decentralization over process performance estimates.
The following example details the impact that the different ways of calculating capability
may have on the study results of a process. A company manufactures a product whose
acceptable dimensions, previously specified by the customer, range from 155 mm to 157 mm.
The first 10 parts made each day by a machine that manufactures the product and works during
one period only were collected as samples over a period of 28 days. Evaluation data taken from
these parts was used to make an Xbar-S control chart.
This chart presents only common cause variation and as such, leads to the conclusion that the
process is predictable. Calculation of process capability presents the results in Figure 2.
Figure 2: Process Capability of Dimension
Calculating Cp
The Cp rate of capability is calculated from the formula:

$C_p = \frac{USL - LSL}{6\hat{\sigma}}$, with $\hat{\sigma} = \frac{\bar{s}}{c_4}$

where $\hat{\sigma}$ represents the standard deviation for the population, estimated from $\bar{s}$,
the mean of the standard deviations within each rational subgroup, and $c_4$, a statistical
coefficient of correction.
In this case, the formula considers the quantity of variation given by the standard deviation and
the acceptable gap allowed by the specified limits, irrespective of where the mean lies. The results
reflect the population's standard deviation, estimated from the mean of the standard deviations
within the subgroups as 0.413258, which generates a Cp of 0.81.
Rational Subgroups
A rational subgroup is a concept developed by Shewhart while he was defining control
charts. It consists of a sample in which the differences in the data within a subgroup are
minimized and the differences between groups are maximized. This allows a clearer
identification of how the process parameters change along a time continuum. In the example
above, the process used to collect the samples allows consideration of each daily collection as
a particular rational subgroup.
Calculating Cpk
Unlike Cp, the Cpk rate takes the position of the mean into account: it compares the distance from
the process mean to the nearer specification limit with $3\hat{\sigma}$, i.e.
$C_{pk} = \min\left(\frac{USL - \text{mean}}{3\hat{\sigma}}, \frac{\text{mean} - LSL}{3\hat{\sigma}}\right)$.
A decentralized process, as a consequence, presents a higher possibility of not reaching the process capability targets. In
the example above, specification limits are defined as 155 mm and 157 mm. The mean
(155.74) is closer to one of them than to the other, leading to a Cpk factor (0.60) that is lower
than the Cp value (0.81). This implies that the LSL is more difficult to achieve than the USL.
Non-conformities exist at both ends of the histogram.
Estimating Pp
Similar to the Cp calculation, the performance rate Pp is found as follows:

$P_p = \frac{USL - LSL}{6s}$

where s is now the overall (long-term) standard deviation calculated from all of the individual
readings, without reference to the rational subgroups.
Once more it becomes clear that this estimate is able to diagnose decentralization problems,
aside from the quantity of process variation. Following the tendencies detected in Cpk, notice
that the Pp value (0.76) is higher than the Ppk value (0.56), due to the fact that the rate of
discordance with the LSL is higher. Because the calculation of the standard deviation is not
related to rational subgroups, the standard deviation is higher, resulting in a Ppk (0.56) lower
than the Cpk (0.60), which reveals a more negative performance projection.
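An illustrative sketch of the four indices side by side: the data are simulated, the specification limits match the example (155 mm and 157 mm), the within-subgroup sigma follows the $\bar{s}/c_4$ convention given in the Cp formula above, and the $c_4$ value used is the one tabulated for subgroups of size 10.

```python
# Cp/Cpk use a within-subgroup sigma (mean subgroup std divided by c4), while
# Pp/Ppk use the overall standard deviation of all the data.
import numpy as np

LSL, USL = 155.0, 157.0
c4 = 0.9727                                      # bias-correction constant for subgroup size 10

rng = np.random.default_rng(2)
data = rng.normal(155.74, 0.40, size=(28, 10))   # 28 daily subgroups of 10 parts (simulated)

sigma_within = data.std(axis=1, ddof=1).mean() / c4   # s-bar / c4
sigma_overall = data.flatten().std(ddof=1)            # long-term standard deviation
mean = data.mean()

Cp  = (USL - LSL) / (6 * sigma_within)
Cpk = min(USL - mean, mean - LSL) / (3 * sigma_within)
Pp  = (USL - LSL) / (6 * sigma_overall)
Ppk = min(USL - mean, mean - LSL) / (3 * sigma_overall)
print(f"Cp={Cp:.2f}  Cpk={Cpk:.2f}  Pp={Pp:.2f}  Ppk={Ppk:.2f}")
```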
$\bar{p} = \frac{\sum_{i=1}^{k} x_i}{\sum_{i=1}^{k} n_i} = \frac{\text{total number of defective items in all the samples taken}}{\text{total number of items sampled}}$
The process is out of control since two points are above the UCL. The point below the LCL is not
considered, as it yields a better result than anticipated.
Control Chart for Defects (c-Chart)
Consider the occurrence of defects in an inspection of product(s). Suppose that defects occur
in this inspection according to the Poisson distribution; that is,

$P(x) = \frac{c^x e^{-c}}{x!}$

where x is the number of defects and c is the mean (and variance) of the Poisson
distribution.
When the mean number of defects c in the population is not known, select n samples. If there are
$c_i$ defects in the i-th sample, then the average number of defects per sample is
$\bar{c} = \frac{\sum_{i=1}^{n} c_i}{n}$, with control limits $UCL = \bar{c} + 3\sqrt{\bar{c}}$ and $LCL = \bar{c} - 3\sqrt{\bar{c}}$.
Note: If this calculation yields a negative value of LCL then set LCL = 0.
Illustration
The following dataset refers to the number of holes (defects) in knitwears.
For the u chart, the number of defects per unit is $u = \frac{c}{n}$, and the centerline is
$\bar{u} = \frac{\sum c}{\sum n}$, with control limits

$UCL = \bar{u} + 3\sqrt{\frac{\bar{u}}{n}}$ and $LCL = \bar{u} - 3\sqrt{\frac{\bar{u}}{n}}$

In the case of the u chart, the UCL and LCL for each day are calculated and plotted on the graph.
For example, with $\bar{u} = 1.20$ and n = 110 units inspected on Jan 30:

$UCL_{Jan\,30} = 1.20 + 3\sqrt{\frac{1.20}{110}} = 1.51$,  $LCL_{Jan\,30} = 1.20 - 3\sqrt{\frac{1.20}{110}} = 0.89$
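A small sketch reproducing the Jan 30 limits quoted above from $\bar{u}$ = 1.20 and n = 110:

```python
# u-chart limits for a day on which 110 units were inspected, with u-bar = 1.20.
import math

u_bar = 1.20
n = 110

ucl = u_bar + 3 * math.sqrt(u_bar / n)   # 1.51
lcl = u_bar - 3 * math.sqrt(u_bar / n)   # 0.89
print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```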
Before final shipment, a quality inspector evaluates auto supply parts and rates each
item as pass or fail to ensure that the company does not ship any parts that will be
unusable.
Note
The term "nonconformity" is sometimes used to signify a defect. The term
"nonconforming" is sometimes used to signify a defective.
p chart - proportion of nonconforming items per sample, centerline $\bar{p}$:
$UCL = \bar{p} + 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}$,  $LCL = \max\left(0,\ \bar{p} - 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}\right)$

np chart - number of nonconforming items per sample, centerline $n\bar{p}$:
$UCL = n\bar{p} + 3\sqrt{n\bar{p}(1-\bar{p})}$,  $LCL = \max\left(0,\ n\bar{p} - 3\sqrt{n\bar{p}(1-\bar{p})}\right)$

c chart - number of nonconformities per area of opportunity, centerline $\bar{c}$:
$UCL = \bar{c} + 3\sqrt{\bar{c}}$,  $LCL = \max\left(0,\ \bar{c} - 3\sqrt{\bar{c}}\right)$

u chart - number of nonconformities per unit area of opportunity a, centerline $\bar{u}$:
$UCL = \bar{u} + 3\sqrt{\frac{\bar{u}}{a}}$,  $LCL = \max\left(0,\ \bar{u} - 3\sqrt{\frac{\bar{u}}{a}}\right)$
The second level of quality is the lot tolerance proportion defective (LTPD), or the worst
level of quality that the consumer can tolerate. The LTPD is a definition of bad quality that
the consumer would like to reject. Recognizing the high cost of defects, operations managers
have become more cautious about accepting materials of poor quality from suppliers. Thus,
sampling plans have lower LTPD values than in the past. The probability of accepting a lot
with LTPD quality is the consumer's risk ($\beta$), or the Type II error of the plan. A common
value for the consumer's risk is 0.10, or 10 percent.
Sampling Plans
All sampling plans are devised to provide a specified producer's and consumer's risk.
However, it is in the consumer's best interest to keep the average number of items inspected
(ANI) to a minimum because that keeps the cost of inspection low. Sampling plans differ
with respect to ANI. Three often-used attribute sampling plans are the single-sampling plan,
the double-sampling plan, and the sequential-sampling plan. Analogous plans also have been
devised for variable measures of quality.
Sampling plans are classified based on the number of samples required for a decision. These include:
Single-sampling plans
Double-sampling plans
Multiple-sampling plans
Sequential-sampling plans
Single-, double-, multiple-, and sequential sampling plans can be designed to produce
equivalent results. Factors to consider include:
Administrative efficiency
Type of information produced by the plan
Average amount of inspection required by plan
Impact of the procedure on manufacturing flow
Single-Sampling Plan The single-sampling plan is a decision rule to accept or reject a lot
based on the results of one random sample from the lot. The procedure is to take a random
sample of size (n) and inspect each item. If the number of defects does not exceed a specified
acceptance number (c), the consumer accepts the entire lot. Any defects found in the sample
are either repaired or returned to the producer. If the number of defects in the sample is
greater than c, the consumer subjects the entire lot to 100 percent inspection or rejects the
entire lot and returns it to the producer. The single-sampling plan is easy to use but usually
results in a larger ANI than the other plans. After briefly describing the other sampling plans,
we focus our discussion on this plan.
Sequential-Sampling Plan In a sequential-sampling plan, items are inspected one at a time and a
running count of defectives is kept. If the cumulative number of defectives is less than a certain
acceptance number (c1), the consumer accepts the lot. If the number is greater than another
acceptance number (c2), the consumer rejects the lot. If the number is somewhere between the
two, another item is inspected. Figure 7.1
illustrates a decision to reject a lot after examining the 40th unit. Such charts can be easily
designed with the help of statistical tables that specify the accept or reject cut-off values as a
function of the cumulative sample size.
Fig. 7.1 Sequential sampling plan
The ANI is generally lower for the sequential-sampling plan than for any other form of
acceptance sampling, resulting in lower inspection costs. For very low or very high values of
the proportion defective, sequential sampling provides a lower ANI than any comparable
sampling plan. However, if the proportion of defective units falls between the AQL and the
LTPD, a sequential-sampling plan could have a larger ANI than a comparable single- or
double-sampling plan (although that is unlikely). In general, the sequential-sampling plan
may reduce the ANI to 50 percent of that required by a comparable single-sampling plan and,
consequently, save substantial inspection costs.
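To make the sequential decision rule concrete, here is a minimal Python sketch; the linear accept and reject cut-off lines are hypothetical stand-ins for the values that would be read from the statistical tables mentioned above.

def sequential_inspection(results, accept_cutoff, reject_cutoff):
    # results: inspection outcomes in order, 0 = good item, 1 = defective item.
    # accept_cutoff(k), reject_cutoff(k): cut-off values after k items have been inspected.
    defectives = 0
    for k, outcome in enumerate(results, start=1):
        defectives += outcome
        if defectives >= reject_cutoff(k):
            return "reject", k
        if defectives <= accept_cutoff(k):
            return "accept", k
    return "no decision yet", len(results)

# Hypothetical linear cut-off lines of the kind sketched in Fig. 7.1.
accept = lambda k: 0.05 * k - 1.0     # acceptance only becomes possible once this is non-negative
reject = lambda k: 0.05 * k + 1.5
print(sequential_inspection([0, 0, 1, 0, 1, 0, 0, 1, 0, 1], accept, reject))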
A typical OC curve for a single-sampling plan shows the probability α of
rejecting a good lot (producer's risk) and the probability β of accepting a bad lot (consumer's
risk). Consequently, managers are left with choosing a sample size n and an acceptance
number c to achieve the level of performance specified by the AQL, α, LTPD, and β.
Constructing an OC Curve
The Noise King Muffler Shop, a high-volume installer of replacement exhaust muffler
systems, just received a shipment of 1,000 mufflers. The sampling plan for inspecting these
mufflers calls for a sample size of n = 60 and an acceptance number of c = 1. The contract with the muffler
manufacturer calls for an AQL of 1 defective muffler per 100 and an LTPD of 6 defective
mufflers per 100. Calculate the OC curve for this plan, and determine the producer's risk and
the consumer's risk for the plan.
SOLUTION
c=1
n = 60
Let p = 0.01. Then multiply n by p to get 60(0.01) = 0.60. Locate 0.60 in Table G. Move to
the right until you reach the column for c = 1. Read the probability of acceptance: 0.878. Repeat
this process for a range of p values. The following table contains the remaining values for the
OC curve.
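Because Table G tabulates cumulative Poisson probabilities, the OC-curve points can also be computed directly. A minimal Python sketch (the use of scipy here is my own choice, not something taken from the example):

from scipy.stats import poisson

n, c = 60, 1
for p in (0.01, 0.02, 0.03, 0.04, 0.05, 0.06):
    # Probability of accepting the lot = P(c or fewer defectives), Poisson approximation.
    print(p, round(poisson.cdf(c, n * p), 3))
# p = 0.01 gives 0.878, so the producer's risk is about 0.122;
# p = 0.06 gives about 0.126, the consumer's risk quoted below.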
DECISION POINT
Note that the plan provides a producer's risk of 12.2 percent and a consumer's risk of 12.6
percent. Both values are higher than the values usually acceptable for plans of this type (5
and 10 percent, respectively). Figure 7.3 shows the OC curve and the producer's and
consumer's risks. Management can adjust the risks by changing the sample size.
Fig. 7.3: The OC Curve for Single-Sampling Plan with n=60 and c = 1
The results are plotted in Figure 7.5. They demonstrate the following principle: Increasing c
while holding n constant decreases the producer's risk and increases the consumer's risk.
The producer of the mufflers would welcome an increase in the acceptance number because it
makes getting the lot accepted by the consumer easier. If the lot has only 1 percent defectives
(the AQL) with a sample size of 60, we would expect only 0.01(60) = 0.6 defectives in the
sample. An increase in the acceptance number from one to two lowers the probability of
finding more than two defectives and, consequently, lowers the producer's risk. However,
raising the acceptance number for a given sample size increases the risk of accepting a bad
lot. Suppose that the lot has 6 percent defectives (the LTPD). We would expect to have
0.06(60) = 3.6 defectives in the sample. An increase in the acceptance number from one to two
increases the probability of getting a sample with two or fewer defectives and, therefore,
increases the consumer's risk. Thus, to improve Noise King's single-sampling acceptance
plan, management should increase the sample size, which reduces the consumer's risk, and
increase the acceptance number, which reduces the producer's risk. An improved
combination can be found by trial and error using Table G.
The following table shows that a sample size of 111 and an acceptance number of 3 are best.
This combination actually yields a producer's risk of 0.026 and a consumer's risk of 0.10 (not
shown). The risks are not exact because c and n must be integers.
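A quick numerical check of the quoted plan, using the same Poisson approximation (a sketch only; an automated search over n and c can land one or two units away from the table-based answer because of table granularity):

from scipy.stats import poisson

# Risks of the improved plan n = 111, c = 3.
print(round(1 - poisson.cdf(3, 111 * 0.01), 3))   # producer's risk, about 0.026
print(round(poisson.cdf(3, 111 * 0.06), 3))       # consumer's risk, roughly 0.10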
The analyst can calculate the average outgoing quality (AOQ) to estimate the performance of the plan over a range of
possible proportion defectives in order to judge whether the plan will provide an acceptable
degree of protection. The maximum value of the average outgoing quality over all possible
values of the proportion defective is called the average outgoing quality limit (AOQL). If
the AOQL seems too high, the parameters of the plan must be modified until an acceptable
AOQL is achieved.
Step 3: Identify the largest AOQ value, which is the estimate of the AOQL. In this example,
the AOQL is 0.0155 at p = 0.03.
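A sketch of the AOQ calculation under rectifying inspection, assuming the improved plan n = 111, c = 3 and the 1,000-unit lot from the Noise King example (these parameters are my assumption for the illustration); the maximum comes out close to the 0.0155 quoted above.

from scipy.stats import poisson

def aoq(N, n, c, p):
    # Average outgoing quality: accepted lots pass their defectives on,
    # rejected lots are 100 percent inspected and rectified.
    return poisson.cdf(c, n * p) * p * (N - n) / N

curve = [(p / 1000, aoq(1000, 111, 3, p / 1000)) for p in range(1, 101)]
aoql, p_at_max = max((q, p) for p, q in curve)
print(aoql, p_at_max)   # maximum AOQ (the AOQL), reached near p = 0.025-0.03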
Illustration
An inspection station has been installed between two production processes. The feeder
process, when operating correctly, has an acceptable quality level of 3 percent. The
consuming process, which is expensive, has a specified lot tolerance proportion defective of 8
percent. The feeding process produces in batches; if a batch is rejected by the inspector,
the entire batch must be checked and the defective items reworked. Consequently,
management wants no more than a 5 percent producer's risk and, because of the expensive
process that follows, no more than a 10 percent chance of accepting a lot with 8 percent
defectives or worse.
a. Determine the appropriate sample size, n, and the acceptable number of defective
items in the sample, c.
b. Calculate values and draw the OC curve for this inspection station.
c. What is the probability that a lot with 5 percent defectives will be rejected?
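Part (a) can be approached by a small search over candidate plans using the Poisson approximation (a sketch only; values read from printed tables, as the chapter does, may differ slightly).

from scipy.stats import poisson

def find_plan(aql, ltpd, alpha, beta, max_n=400):
    # Smallest single-sampling plan (n, c) whose producer's risk is at most alpha
    # and whose consumer's risk is at most beta.
    for n in range(1, max_n + 1):
        for c in range(n + 1):
            producer_risk = 1 - poisson.cdf(c, n * aql)
            consumer_risk = poisson.cdf(c, n * ltpd)
            if producer_risk <= alpha and consumer_risk <= beta:
                return n, c
    return None

print(find_plan(0.03, 0.08, 0.05, 0.10))   # roughly n = 178, c = 9 under this approximation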
CUSUM Control Chart
The observations are plotted on a basic control chart, and the cumulative sum of their deviations from the target mean is examined.
When the process remains in control with its mean at the target value μ0, the cumulative sum is a random walk with mean zero.
When the mean shifts upward to a value μ1 such that μ1 > μ0, an upward or positive drift develops in the cumulative sum.
When the mean shifts downward to a value μ1 such that μ1 < μ0, a downward or negative drift develops in the CUSUM.
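In symbols, the quantity plotted is the standard CUSUM statistic (e.g. Montgomery, 2001):

S_i = \sum_{j=1}^{i} (x_j - \mu_0) = (x_i - \mu_0) + S_{i-1}, \qquad S_0 = 0,

so an in-control process produces a sum that wanders around zero, while a sustained shift of the mean produces a steady drift.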
Illustration: CUSUM
Tabular CUSUM
The tabular CUSUM works by accumulating deviations from the target value μ0 that are above the target with one
statistic C+ and accumulating deviations from the target value that are below the target with another statistic C−.
These statistics are called the upper CUSUM and lower CUSUM, respectively.
The reference (or allowance) value K is usually taken as one-half of the magnitude of the shift to be detected,
K = |μ1 − μ0| / 2, where μ1 denotes the new (shifted) process mean value and μ0 and σ indicate the old process mean value and the old process
standard deviation, respectively.
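In the standard form (Montgomery, 2001), the two statistics are computed recursively as

C_i^+ = \max\left[0,\; x_i - (\mu_0 + K) + C_{i-1}^+\right],
C_i^- = \max\left[0,\; (\mu_0 - K) - x_i + C_{i-1}^-\right],

with starting values C_0^+ = C_0^- = 0.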
If either Ci+ or Ci− exceeds the decision interval H, the process is considered to be out of control. A reasonable value
for H is five times the process standard deviation, H = 5σ.
Illustration: Tabular CUSUM
Moving Average Control Chart
Control Limits
If μ0 denotes the target value of the mean used as the center line of the control chart, then the three-sigma control
limits for Mi are as follows.
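In the standard form (e.g. Montgomery, 2001), with w the span of the moving average and σ the process standard deviation,

M_i = \frac{x_i + x_{i-1} + \cdots + x_{i-w+1}}{w}, \qquad
UCL = \mu_0 + \frac{3\sigma}{\sqrt{w}}, \qquad
LCL = \mu_0 - \frac{3\sigma}{\sqrt{w}},

with w replaced by i for the start-up periods i < w, which is why the early limits are wider.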
The control procedure would consist of calculating the new moving average Mi as each observation xi becomes
available, plotting Mi on a control chart with the upper and lower limits given above, and concluding that the process is
out of control if Mi exceeds the control limits. In general, the magnitude of the shift of interest and w are inversely
related; smaller shifts would be guarded against more effectively by longer-span moving averages, at the expense of
quick response to large shifts.
Illustration
The observations xi of strength of a cotton carded rotor yarn for the periods 1 ≤ i ≤ 30 are shown in the table. Let us
set up a moving average control chart of span 5 at time i. The targeted mean yarn strength is 4.5 cN and the standard
deviation of yarn strength is 0.5 cN.
Data
Calculations
The statistic Mi plotted on the moving average control chart will be for periods i ≥ 5. For time periods i < 5, the average
of the observations for periods 1, 2, …, i is plotted. The values of these moving averages are also shown in the table.
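A short Python sketch of the moving-average calculation (the readings below are hypothetical stand-ins for the yarn-strength data, which are not reproduced here):

mu0, sigma, w = 4.5, 0.5, 5          # target mean (cN), standard deviation (cN), span
x = [4.3, 4.6, 4.4, 4.7, 4.5, 4.9, 4.2, 4.6, 4.8, 4.4]   # hypothetical readings, cN

for i in range(1, len(x) + 1):
    span = min(i, w)                              # fewer than w observations at start-up
    m_i = sum(x[i - span:i]) / span               # moving average M_i
    ucl = mu0 + 3 * sigma / span ** 0.5           # limits are wider while span < w
    lcl = mu0 - 3 * sigma / span ** 0.5
    print(i, round(m_i, 2), round(lcl, 2), round(ucl, 2))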
Control Chart
Conclusion
Note that there is no point that exceeds the control limits. Also note that for the initial periods i<w the control limits
are wider than their final steady-state value. Moving averages that are less than w periods apart are highly
correlated, which often complicates interpreting patterns on the control chart. This is clearly seen in the control
chart.
Comparison Between Cusum Chart and MA Chart
The MA control chart is more effective than the Shewhart control chart in detecting small process shifts. However, it
is not as effective against small shifts as the CUSUM chart. Nevertheless, the MA control chart is considered simpler to
implement than the CUSUM chart in practice.
EWMA Control Chart
As a rule of thumb, the weighting constant λ of the EWMA chart should be small to detect smaller shifts in the process
mean. It is generally found that values in the range 0.05 ≤ λ ≤ 0.25 work well in practice. It is also found that L = 3
(3-sigma control limits) works reasonably well, particularly with higher values of λ. But when λ is small, that is,
λ ≤ 0.1, a choice of L between 2.6 and 2.8 is advantageous.
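The chart statistic and limits that these recommendations for λ and L refer to are, in the standard formulation (Montgomery, 2001),

z_i = \lambda x_i + (1 - \lambda) z_{i-1}, \qquad z_0 = \mu_0,

UCL_i = \mu_0 + L\sigma\sqrt{\frac{\lambda}{2-\lambda}\left[1 - (1-\lambda)^{2i}\right]}, \qquad
LCL_i = \mu_0 - L\sigma\sqrt{\frac{\lambda}{2-\lambda}\left[1 - (1-\lambda)^{2i}\right]},

where the bracketed term approaches 1 for large i, so the limits widen toward their steady-state values.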
Illustration
Let us take our earlier example of yarn strength in connection with the MA control chart. Here, the process mean is
taken as μ0 = 4.5 cN and the process standard deviation is taken as σ = 0.5 cN. We choose λ = 0.1 and L = 2.7. We would
expect this choice to result in an in-control average run length of about 500 and an average run length of about 10.3 for
detecting a shift of one standard deviation in the mean. The observations of yarn strength, the EWMA values, and the control
limit values are shown in the following table.
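A short Python sketch of the same calculation (the readings below are hypothetical stand-ins, not the original yarn-strength data):

mu0, sigma, lam, L = 4.5, 0.5, 0.1, 2.7
x = [4.3, 4.6, 4.4, 4.7, 4.5, 4.9, 4.2, 4.6, 4.8, 4.4]   # hypothetical readings, cN

z = mu0                                   # EWMA starting value z0 = mu0
for i, xi in enumerate(x, start=1):
    z = lam * xi + (1 - lam) * z          # EWMA update
    half_width = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))) ** 0.5
    print(i, round(z, 3), round(mu0 - half_width, 3), round(mu0 + half_width, 3))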
Table
Graph
Conclusion
Note that there is no point that exceeds the control limits. We therefore conclude that the process is in control.
References
1. Grant, E. L. and Leavenworth, R. S., Statistical Quality Control, Tata McGraw Hill Education Private Limited,
New Delhi, 2000.
2. Montgomery, D. C., Introduction to Statistical Quality Control, John Wiley & Sons, Inc., Singapore, 2001.
3. Gupta, R. C., Statistical Quality Control, Khanna Publishers, New Delhi, 2005.