Chapter 11


1 Statistics for Quality

Objectives
• To understand the term “quality” in general and its relationship with
statistics.
• To achieve quality improvement by reducing variation, and by following systematic quality improvement methods such as the Shewhart-Deming cycle, the Six Sigma cycle, etc.
• To understand the importance of statistical thinking and the role of several
simple statistical tools for use at the shop floor to engage everyone in an
organisation to improve quality.
• To understand the distinction between common and special causes of variation, formation of rational subgroups, and avoidance of process tampering.
• To appreciate the role of Design of Experiments (DOE) for reducing common cause variation and measurement of process capability.
• To understand the methodology of Shewhart control charting for process
monitoring.
• To implement X̄, R and S variables control charts for Phase I and II, and understand their construction.
• To implement p and c control charts, and understand their construction.
• To comprehend the role of sampling inspection for product assurance,
Operating Characteristic curves and quality levels.

2 Role of Statistics in Quality

Quality and its management played a crucial role in human history. Managing
quality was important even for ancient civilisations. Standardisation was recognised as the first step towards quality. In ancient Rome, a uniform measurement
system was introduced for manufacturing bricks and pipes; and building regulations were in force. Water clocks and sundials were used in ancient Egypt and
Babylon (15th century BC) even though they were not terribly accurate. The
Chinese Song Dynasty (10th century) even mandated the control of shape, size,
length, and other quality factors of products in handicrafts using measurement
tools, such as carpenter’s squares.
The industrial revolution began in the United Kingdom during the 18th century and then extended to the US and other countries. Quality became harder to manage due to mass production, which was made achievable
by the division of labour and the use of machinery. In such a production line,
workers performed repetitive tasks in a cooperative way using machinery. This
resulted in huge productivity gains. But the number of factors and variables
affecting the quality of a product in a mass production line were also numerous
when compared to the production of a single item by an artisan who did all work
from start to end. Division of labour for mass production also took away the
pride of workmanship. Hence quality suffered in the production line and quality
monitoring became an essential activity. Due to mass manufacture, engineers
were forced to look beyond using standardised measurements. The causes of
quality variation were numerous and hence statistical methods were needed for
quality monitoring and assurance.
Prof Walter Shewhart and Harold Dodge implemented statistical methods for quality in the mid-1920s in the USA. The Second World War was the main catalyst for the extensive use of statistical quality control methods for improving America's wartime production. Certain statistical methods were even classified as military secrets. Dr. Kaoru Ishikawa, a well-known Japanese quality philosopher, speculated that the Second World War was won by quality control and by the utilisation of statistical methods. The western industries could not sustain their
achievements in quality mainly due to the failure of management. The Japanese success on the quality front in the latter half of the last century can be partly attributed to the wider use of some simple statistical tools together with more
advanced ones such as experimental designs. A word of caution! Quality problems can only be partly solved by statistical methods. For achieving excellence
in quality, company-wide participation, customer focus, good management etc
are important. In the last three decades, many companies in both developed
and developing countries embraced the concept of total quality management
which evolved from humble beginnings, namely the use of simple statistical tools on the shop-floor.

2.1 What is quality?

The term quality is understood in different ways in different contexts. For
some, quality means excellence in all aspects. But we cannot ignore the price
or affordability aspect and this may be a major limiting factor for achieving
excellence. For example, an expensive luxury model car having several ‘extra’
features cannot be compared to a more affordable car while ignoring the price
factor. It is inappropriate to declare the cheaper car to be of ‘poor’ quality compared to the luxury car, when the prices are not in the same range.
Some define quality to mean ‘fitness-for-use’. For some products, the performance or use characteristics are important. For example, we may compare the
cleaning performance of two laundry detergents. How safe is a brand of detergent to the users or environment? This leads to a dimension of safety associated
with the detergent quality. Does the detergent affect the life of our garments?
Here we think of the durability aspect of the fabric quality. Does the detergent
affect the colour of the fabric? Here we think of the visual appeal or aesthetic
dimension of quality. ATM machines make cash available 24 hours a day - a dimension of availability. Banks are concerned about the functioning of the ATM machine without failure or repair - a dimension of reliability associated
with the ATM machine quality. Is it easy to service it in the event of malfunction or perform routine maintenance? Here we consider the serviceability and
maintainability aspects of the ATM machine. Thus we see that quality is more
a latent concept that involves (i) performance, (ii) reliability, (iii) durability,
(iv) serviceability, (v) aesthetics and (vi) features etc. Sometimes quality is
simply understood by the brand image of the product (perceived quality), or
how far it conforms to the Standards or customer specifications. We require an
operational definition for quality rather than simply conceptualising it from a
long list of its descriptors.
Many published national and international Standards define quality as the
totality of features and characteristics of a product or service that bear on its
ability to satisfy given or implied needs. The features and characteristics referred to in the above definition may be of several types. Physical characteristics
are those continuous variables such as length, weight, voltage, viscosity, etc. The
feature variables are usually attribute in nature such as sensory ones like taste,
appearance, colour and smell. Certain quality characteristics such as reliability,
maintainability, serviceability are time dependent. The phrase given needs in
the definition requires a clear list of (customer) needs to be identified. It is obvious that all the needs of the customer of a product or service, including safety,
design, aesthetics, etc, should be listed in the set of given needs. The price or
affordability of the product or service should also receive major consideration.
The performance characteristics should receive priority if price is not a limiting
factor. Societal needs are also to be be considered in addition to customer needs
in certain cases. How far does this set of characteristics and features meet the
given needs? That is, the ability to satisfy the (customer) needs is the quality
built into the product or service through the features and characteristics. This
is shown in Figure 1.

[Figure 1: A View of Quality. The ‘totality of features and characteristics’ is set against the ‘given needs’; their match constitutes quality.]

Suppose that we would like to understand the ‘quality’ of a brand of blackcurrant nectar bottled by a company. Let us suppose that the following features
and characteristics are identified:
Features Colour, appearance, smell, flavour, packaging and labelling etc
Characteristics Relative density at 20˚C, content of wine acid, alcohol, acetic
acid, vitamin C, microbial attributes etc
How to measure the above? Let the analytical measures of the characteristics
and features, called quality measures, be as given below:
1. Colour: rating scale 1 to 5 (5 being excellent)
2. Appearance: rating scale 1 to 5 (5 being excellent)
3. Smell: rating scale 1 to 5 (5 being excellent)
4. Flavour: rating scale 1 to 5 (5 being excellent)
5. Density: measurement by laboratory methods
6. Acid content: measurement by laboratory methods
7. Alcohol: measurement by laboratory methods
8. Vitamin C: measurement by laboratory methods
9. Packaging etc: visual inspection
The ‘totality’ of the above features and characteristics is expected to satisfy
the ‘needs’. Hence a list of customer needs should be identified by surveying the
customers. Certain characteristics such as microbiological characteristics partly
represent ‘societal’ needs. How far the product features and characteristics meet
these needs determines the quality of the blackcurrant nectar.

In practice, we often use the actual measurement of the characteristic(s) of
interest as a quality measure. Statistical methods can be employed to identify
the important characteristics that are highly correlated with the latent variable
quality. The fraction of nonconforming product is also adopted as an (inverse)
measure of quality. Such fraction nonconforming measures will also be called
quality levels. It is also possible to develop a demerit quality score by rating
methods, etc. Basically we will employ one or more quality measures whose
major function is to provide analytical information on the ‘quality’ of a product
or service.

2.2 Understanding and reducing variation

Quality philosophers such as Dr. Genichi Taguchi define quality as the loss a
product causes to society after being shipped, other than any losses caused by
its intrinsic function. Variation from the target of a quality characteristic may
be caused by the uncontrollable factors known as noise, such as
• Outer noise due to humidity, temperature, vibration, dust etc.
• Inner noise due to wear and deterioration.
• In-between noise due to the material, worker etc.
The strength of these noises largely determines the amount of variability from the target, and it directly impacts the controllable process factors or parameters such as speed, temperature, etc. Variability can be defined and understood only in statistical terms. Hence the use of statistical methods becomes important for reducing variability, that is, improving quality. Montgomery (1996, p.4) defines quality as inversely proportional to variability. In other words, ‘quality improvement’ requires reduction of variability in processes and products.
2.2.1 Process Optimisation and Robustness

Experimental design is a broad area of statistics having its roots in agricultural
experiments. Many of the key terms of experimental design such as plot, treatment, etc still have an agricultural connotation. Nevertheless, the application
of this branch of statistics is not just limited to agriculture.
The design of experiments (DOE) is extremely useful in identifying and
managing the key variables affecting the quality characteristic(s). In industrial
experiments, the controllable input factors will be systematically varied and the
effect of these factors on the response variables will be observed. The variation in
the response quality characteristic(s) will be understood using statistical models
and the conclusions will provide an optimum strategy for the actual mass manufacture. Hence the design of experiments is an off-line quality improvement
tool.
A product is called robust if it performs equally well in good and bad conditions of use. For example, a robust detergent powder is expected to achieve good
results whether the water is hard or soft and cold or hot. Robust experimental
designs identify the optimum mix of controllable factor levels which produces a
response robust to external noise factors.
2.2.2 Statistical Process Control

Statistical process control (SPC) is the methodology for monitoring and optimizing the process output, mainly in terms of variability, and for judging when
changes (engineering actions) are required to bring the process back to a "state of control". This strategy of control differs from the engineering process control (EPC) where the process is allowed to adapt by automatic control devices etc. In other words, SPC techniques aim to monitor the production process while
EPC is used to adjust the production process.
2.2.3 Sampling Inspection

Sampling inspection or acceptance sampling is a quality assurance technique where decisions to accept or reject manufactured products, raw materials, services etc are taken on the basis of inspected samples. This method provides
only an indirect means for quality improvement. Calibration of instruments
to tackle measurement errors is done using statistical methods which aim to
provide measurement quality assurance.

3 Shewhart-Deming cycle of process improvement

Prof Walter Shewhart (1931), who invented the control chart technique and is regarded as the father of SPC, proposed the following three postulates from an engineering viewpoint:
1. "All chance systems of causes are not alike in the sense that they enable us to predict the future in terms of the past."
2. "Systems of chance causes do exist in nature such that we can predict the future in terms of the past even though the causes be unknown. Such a system of chance is termed constant."
3. "It is physically possible to find and eliminate chance causes of variation not belonging to a constant system."
The above three postulates may appear unclear on a first reading. The
following paragraphs explain them, and then show how they lead to SPC and
other procedures.
A production process is always subjected to a certain amount of inherent or natural variability caused by a number of process and input variables. This stable system of chance causes, known as common causes, belongs to the process.


The application of industrial experimentation, proper screening actions to use
the appropriate inputs and other pre-production planning activities are to ensure that the variability in the quality characteristic caused by the process and
input variables is minimal. In other words, the variability expected in the actual
production process should be largely ‘error’ or natural or inherent variability.
Whether the rate of this ‘error’ variability is maintained constant over time,
across various machines etc, should be monitored. There are always common
factors that are inherently ‘designed’ into the process and they continuously affect the process. They produce roughly the same amount of variation but hence
predictable. The variation produced by common causes is sometime referred to
as noise, because there is no ‘real’ change in process performance. Noise cannot
be traced to a specific cause, and it is therefore, although predictable, it is either
unexplainable or uncontrollable.
If the variability in the process is due to identifiable sources such as low
yield raw material, improper machine function, wrong methods, etc, the process
will be operating at an unsatisfactory level. Such sources of variability, which
are preventable, are called assignable or special causes of variation. Assignable
causes lie outside the process, and they contribute significantly to the total variation observed in performance measures. The variation created by assignable
causes is usually unpredictable, but it is explainable after the causes have been observed.
The nature of common cause variation lends itself to the application of statistics, while the variation produced by assignable causes does not. So we characterize the inherent process variation and develop tools to ‘predict’ the presence of special or assignable causes on statistical grounds.
SPC includes many statistical tools to warn of the presence of special cause(s)
of variability in a production process. The actual elimination or correction of the
process variable causing extra variability is basically an engineering function.
Hence it is necessary for the technical person to be familiar with the statistical
techniques and for the statistician to have some knowledge of the production
processes. It is also important that it should be economically feasible to eliminate any special cause of variation.

3.1 State of Statistical Control

Although both common and assignable causes create variation, common causes
contribute to ‘controlled’ variation while assignable causes contribute to ‘uncontrolled’ variation. Shewhart explained the term controlled variation as follows.
“A phenomenon will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be
expected to vary in the future. Here it is understood that prediction within limits
means that we can state, at least approximately, the probability that the observed
phenomenon will fall within given limits.” In other words, a general probability law will apply when the process is subjected to only common causes. In
particular we can state that the current state-of-the-art production involves a
constant amount of variability due to common causes.

A production process is said to be in a state of statistical control if
the only variability present is due to common causes, and all special
causes are absent i.e. a constant common cause system represents
the process. The special or assignable causes will increase the variability
beyond the level permitted by the common or chance causes. Such an increase
in variability due to special causes can be detected using the probability laws
governing the stable state of control. This is explained below:
Variation due to special causes is bound to occur over time due to changes of raw material or operatives, sudden machine failures, etc. Therefore the process
is usually sampled over time, either in fixed or variable intervals. The presence
of special causes can be monitored by considering a control statistic such as
the mean X̄ of a sample of units taken at any given time. The approximate
probability distribution of the control statistic can then be used to define a
range for the inevitable (and hence allowable) common cause variation, known
as control limits. This allowable variation can also result in false alarms. That
is, even when no special causes are present, we may be forced to look for the
presence of special causes. This false alarm probability is usually kept low by
using a signal rule for special causes.

3.2 Process Tampering

Common cause variability is often the result of uncontrollable variables representing the current state of the art of production. Without understanding the
nature of variability permitted by the common causes, if the production process is ‘tampered with’ through unnecessary interventions, then the variability in the quality
characteristic will actually increase. It is important that unnecessary process interventions such as tool changes etc should not become another source of ‘special
cause’. Deming used to demonstrate this concept using a demonstration called
‘Funnel Experiment’ which is briefly described below:
As shown in Figure 2, a funnel is mounted on a stand and the spout is
adjusted towards a target. A marble is then dropped through the funnel and
the final resting position is noted. The distance between the target and the final
resting place represents the random variation. Let us suppose that we do not
adjust the funnel position and simply drop the marble several times and note the
resting positions representing the random variation. Let us also consider certain
additional rules or strategies which represent process intervention or adjustment
actions. A set of such rules used by Deming for his funnel adjustment including
the strategy of not adjusting the funnel (called Rule 1) are summarised below:
Rule 1 The funnel remains fixed, aimed at the target.
Rule 2 Move the funnel from its previous position a distance equal to the
current error (location of drop), in the opposite direction.
Rule 3 Move the funnel to a position that is exactly opposite the point where
the last marble dropped, relative to the target.


Rule 4 Move the funnel to the position where the last marble dropped.
Figure 3 shows the variability involved with the 4 rules using 400 simulated
standard normal random variables (X,Y) targeted at (0,0). It is clear from Deming's funnel demonstration that intervention in a production process should not be made unnecessarily if the process is already stable or in control. The
process stability must be monitored statistically (based on probability laws). If
this is done, then the variation in the process output is reduced by avoiding
unnecessary process interventions or process tampering.

Figure 2: Deming’s Funnel Demonstration
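Deming's four rules are easy to simulate. The following minimal sketch (Python with NumPy; the random seed is an arbitrary choice, and the rule updates are direct translations of the rule statements above, with the target at (0, 0)) drops 400 marbles under each rule and reports the spread of the resting positions. Rule 1 should show the smallest spread, in line with Figure 3.

import numpy as np

rng = np.random.default_rng(1)

def funnel(rule, n=400):
    """Drop n marbles under one of Deming's four funnel rules.
    The target is (0, 0); returns the resting positions."""
    aim = np.zeros(2)                          # current funnel position
    drops = np.empty((n, 2))
    for i in range(n):
        drop = aim + rng.standard_normal(2)    # marble lands randomly about the aim
        drops[i] = drop
        if rule == 2:
            aim = aim - drop                   # move by the current error, opposite direction
        elif rule == 3:
            aim = -drop                        # mirror image of the last drop about the target
        elif rule == 4:
            aim = drop                         # move to where the last marble rested
    return drops

for rule in (1, 2, 3, 4):
    d = funnel(rule)
    print(f"Rule {rule}: standard deviation about the target = {d.std(axis=0).round(2)}")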

3.3 Shewhart-Deming cycle

The plan-do-study-act (PDSA) cycle of quality improvement, also known as
the plan-do-check-act (PDCA) cycle, is largely the contribution of Shewhart
and Deming for quality problem solving and learning. The four phases of this
Shewhart and Deming cycle of quality improvement are the following:
Plan This phase involves developing a hypothesis and determining what resources and changes are needed to solve the problem at hand.
Do This phase largely involves experimental action on a small scale to implement what is planned or hypothesised in Phase 1.
Study/Check After the conduct of the experiment, the outcomes are analyzed.
The main focus here is on learning from the experimental results.
Act Based on the learning in the earlier phase, action is taken to implement the improvement strategy on a large scale. No improvement is final in nature, and hence the cycle restarts at the first (Plan) phase.

[Figure 3: Simulated Results of Deming's Funnel Experiment. Four panels of 400 simulated drops (X against Y): Rule 1 (funnel remains fixed), Rule 2 (funnel moved the distance of the current error in the opposite direction), Rule 3 (bow-tie effect: funnel moved opposite the last marble point, relative to the target), and Rule 4 (random walk: funnel moved to the last marble position).]

[Figure 4: PDCA cycle — Plan, Do, Check and Act arranged as a loop.]

Quality problem solving is rarely a one-shot exercise, and hence multiple
cycles of PDSA will be needed (see Figure 4). For scientific problem solving,
the well known deduction-induction cycle is continuously applied for discovering
new truths. Hence the PDCA cycle is viewed as the modified way of using the
scientific method for solving quality problems.

4 Statistical Thinking for Quality

It is not uncommon to act on information based on past experience, perceptions
or anecdotal evidence. For example, assume that a friend of yours bought a
house and made a 15% capital gain within one year, and tells you that it is a
good idea to buy a property based on his past experience. Are we supposed to
act on his advice? The answer lies in statistical thinking and data analysis. Most
financial markets react to several micro and macro events, but the individual reactions to such events are not all the same, and often the scale of the market reactions can be excessive. For example, "noise trading" is common in stock
markets. The lack of statistical thinking exists in all walks of life.
The American Society for Quality (ASQ, www.asq.org) Statistics Division
(2004) publication Glossary and Tables for Statistical Quality Control suggests
the following definition:
“Statistical Thinking is a philosophy of learning and action based on the
following fundamental principles:
• All work occurs in a system of interconnected processes,
• Variation exists in all processes, and
• Understanding and reducing variation are keys to success.”

The above definition is explained briefly in the following paragraphs:
We must recognise that any response or output is caused by variables involved in an interconnected process. The factors or variables causing output
variations often interact and cannot be thought to be independent of each other
(see Figure 5). Hence the first task must be to understand the structure of the
interconnected process.
[Figure 5: Process flow and interconnectivity. Inputs (from suppliers) flow through an interconnected process to outputs (for customers).]

Everything (manufacturing or non-manufacturing) must be regarded as a
process, and there are always variations. In a manufacturing process, variation
is caused by machines, materials, methods, measurements, people, and physical
and organizational environment. In non-manufacturing or business processes,
people contribute a lot to the total variation in addition to methods, measurement and environment.
Data should guide decisions. That is, statistical models based on data help
us to understand the nature of the variation. The reduction of variation is done
by actions such as eliminating special causes or designing a newly improved
system with smaller variability.
Reverting to the house price example, we first recognise that the change in
house prices is not an unconnected or isolated event but the outcome realized
from an interconnected process. The process leading to house price variation
involves a number of interconnected variables such as the mortgage rate, inflation, wages and employment, migration etc. If we build a suitable statistical
model, then we may recognise that a 15% increase in house prices in a short period is special rather than common. In order to reduce the house price variation, special cause factors (such as dominance of short-term speculative investors or sudden uncontrolled migration) must be addressed appropriately. Reducing the common cause variability in house prices requires planned housing development for meeting the needs of the population.

4.1 Six Sigma Methodology

In the mid-1980s, Motorola Corporation faced stiff competition from competitors whose products were of better quality. Hence a resolution was made to
improve the quality level to 3.4 DPMO (defects per million opportunities) or
below. That is, the resolution was to keep the process variation one-sixth of
the variation allowed by the upper or lower specifications. The low defect level
was achieved by the Six Sigma process management model in Motorola. The
Six Sigma model can be viewed as an improvement over the Shewhart-Deming
cycle. This management methodology is highly data driven and involves the
following steps summarised by the acronym DMAIC (define, measure, analyze,
improve, control).
Define Identify the process or product that needs improvement. Benchmark
with key product or process characteristics of other market leaders.
Measure Select product characteristics (dependent variables); Map the processes; Make necessary measurement and estimate the process capability.
A methodology known as the Quality function deployment (QFD) is used
for selecting critical product characteristics.
Analyse Analyze and benchmark the important product/process performance
measures. Identify the common factors for successful performance. For
analyzing the product/process performance, various statistical and basic
quality tools will be used.
Improve Select the performance characteristics which should be improved.
Identify the process variables causing the variation, and perform statistically designed experiments to set the improved conditions for the key
process variables.
Control Implement statistical process control methods. Reassess the process
capability and revisit one or more of the preceding phases, if necessary.
Statistical thinking and tools play an important role in all the DMAIC
phases. It is important to note that the variables affecting quality must be identified and experimented with, and improvements must be achieved and held.
Quality is the business of everyone in an organisation. Hence employee
training in the use of technical tools and problem solving is an integral part of
the Six Sigma quality management model. Trained employees were even given
martial arts titles “black belts” and “master black belts” depending on their skill
levels and experience! You may be surprised to know that Motorola saved several
billion dollars using the Six Sigma methodology. Since the early nineties, the Six Sigma methodology has been adopted by many multinational companies for achieving quality and hence profitability.
To encourage statistical thinking on the shop floor, and to train in management methodologies such as Six Sigma, several EDA tools are used. Simple
EDA tools such as histograms, scatter plots, boxplots etc are extremely useful
for understanding a production process.

4.1.1 Histogram for Process stability

Assume that a certain machining operation is done on a six-head Bullard machine. Each head acts as a separate machine, and tools are set in slides which can be moved in or out when adjustment is required. Process adjustments, such as changing a tool, are done by operators. For a certain dimensional characteristic, assume that the histogram shown in Figure 6 was drawn with extensive past data. The bimodal shape in the histogram suggests that the six heads are not the same. So actions can be taken to train the two operators to standardise the machine adjustment actions.

[Figure 6: Histograms for Judging Process Stability. A density histogram of the dimension (about 1.90 to 1.94) showing a clearly bimodal shape.]

Specification limits are limits defined to represent the extreme possible values
of a quality characteristic for conformance of the individual unit of the product.
For example, the minimum and maximum for the dimensional characteristic may be externally fixed as 1.91 cm and 1.93 cm respectively. We will call the minimum value the lower specification limit (LSL) and the maximum value
as the upper specification limit (USL). A quality characteristic may have only a
single specification limit (LSL or USL) or both. Specification limits are fixed on
technical grounds and the actual production should be well contained within the
specification limits to prevent production of nonconforming or defective items.
Histograms are therefore useful to graphically assess whether the production
process is capable of meeting the specifications. The above histogram shows that
the process is not meeting the LSL and the USL conditions and a good fraction
of the production must be nonconforming. This high level of nonconformance
calls for variability reduction. Separate histograms for each head with overlaid
specification limits may be drawn. They will be useful to graphically assess the
process capability of each head.
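Such a histogram with overlaid specification limits takes only a few lines to produce. The sketch below (Python with NumPy and matplotlib) uses hypothetical bimodal data — two heads set about slightly different means; the sample sizes and standard deviations are illustrative assumptions — together with the specification limits 1.91 cm and 1.93 cm quoted above.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
# hypothetical data: two machine heads adjusted to slightly different means
dimension = np.concatenate([rng.normal(1.912, 0.004, 500),
                            rng.normal(1.928, 0.004, 500)])

plt.hist(dimension, bins=40, density=True)
plt.axvline(1.91, color="red", linestyle="--", label="LSL = 1.91")
plt.axvline(1.93, color="red", linestyle=":", label="USL = 1.93")
plt.xlabel("dimension (cm)")
plt.ylabel("Density")
plt.legend()
plt.show()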
4.1.2 Check Sheet

A check sheet is a simple device for collecting and summarising all (historical) defect data on the quality characteristic(s) against time, machine, operatives, etc. It helps one identify trends or meaningful results in addition to
its role in proper record keeping. While designing a check sheet, it is necessary
to clearly specify the following:
1. Quality measure
2. Part or operation number
3. Date
4. Name of analyst/sampler
5. Brief method of sampling/data collection
6. All information useful for diagnosing poor quality
The design of a check sheet depends on the requirements of data collection.
For example, the check sheet shown as Figure 7 includes the time of sampling
and the specification limits for a can filling operation.
There is no fixed format for a check sheet; it can be designed in various ways dictated by our requirements. It can also be designed to record several
factors and responses for industrial experimentation purposes.

4.1.3 Defect Concentration Diagram

A defect concentration diagram, also known as a location check sheet, is a picture of a unit, showing all relevant views and the various types of (apparent) defects.
It helps to analyse whether the location of the defects on the unit (such as
position of oil leakage in a container) gives any useful information about the
causes of defects. When defect data are recorded on a defect location check
sheet over a large number of units, patterns such as clustering may be observed.
This is helpful for identifying and removing the source of defects. Figure 8
provides a defect location check sheet for a painting operation.

XYZ Industries, PQRS
Can Filler Machine: AAAA          Date:
Shift (circle): I II III          Operator: ________________
Specification: 991 ml minimum     Target: 1000 ml

Volume (ml)    Hour 1  Hour 2  Hour 3  Hour 4  Hour 5  Hour 6  Hour 7  Hour 8  Total
under 980
980 - 985
986 - 990
991 - 995
996 - 1000
1001 - 1005
1006 - 1010
over 1010

Steps:
1. Check five cans per hour for 8 hours
2. Place a tally mark in the proper box after measurement
Remarks:

Figure 7: A typical Check sheet

[Figure 8: A Location Check Sheet. Front view of the painted unit, with marks recorded for the defect types Scratches, Dirt, Thick, Thin and Bubble.]

Ideally we would expect the defects to be randomly (uniformly) distributed.
The distance between defects of the same type, as well as the distance between
defects of different types, may be used as a numerical measure for clustering
and dependence. If the location check sheet is not clearly indicating assignable
patterns, then the relevant data can be analyzed comparing the real data with
simulated uniform random numbers.
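One simple numerical check along these lines compares the average nearest-neighbour distance between the observed defect locations with the same quantity computed for simulated uniform points; a markedly smaller observed value suggests clustering. A minimal sketch, assuming hypothetical defect coordinates on a unit square:

import numpy as np

rng = np.random.default_rng(5)

def mean_nn_distance(pts):
    """Average distance from each defect to its nearest neighbour."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # ignore the zero self-distances
    return d.min(axis=1).mean()

defects = rng.random((30, 2))          # hypothetical defect locations
observed = mean_nn_distance(defects)

# reference distribution under a uniform (no-clustering) hypothesis
sims = [mean_nn_distance(rng.random((30, 2))) for _ in range(1000)]
frac_below = np.mean([s <= observed for s in sims])
print(f"observed mean NN distance = {observed:.3f}; "
      f"fraction of simulations at or below it = {frac_below:.2f}")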
4.1.4 Pareto diagram

This graphical tool derives its name from the Italian economist Pareto, and was
introduced for quality control by Dr. Juran, a famous quality Guru. Juran
found that a vital few causes lead to a large number of quality problems, while the trivial many cause only a few problems.
First, the causes are ranked in descending order of the characteristic of interest, which may be the percentage of nonconforming items caused, economic
losses, etc. The ranked causes are then shown as bars where the bar height
represents the characteristic of interest. The cumulative percentages are also
shown in the diagram. A sample Pareto diagram is shown in Figure 9. The
Pareto diagram is similar to a bar diagram. In both tools, bars represent frequencies. In a bar chart they are not arranged in descending order, while they
are for a Pareto diagram.
The Pareto chart is also successfully used in non-manufacturing applications
such as software quality assurance. A software package is usually tested for
errors before being released commercially. A software package will consist of
several modules and subroutines written by several persons and upgraded over
time. If the software fails on a test, it is possible to track which of the modules
actually caused the failure. Repeated test results when displayed in the form of

a Pareto chart will identify those modules which are to be simplified by breaking
complex subroutines into smaller ones etc.

300



100%

Pareto Chart for defect



250



50%

Error frequency

200

150



25%

100

Cumulative Percentage

75%



0%

50

0
contact num.

price code

supplier code

part num.

schedule date

Error causes

Figure 9: Pareto Diagram
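A Pareto chart is straightforward to build from raw counts. The sketch below (Python with matplotlib) uses hypothetical error counts for the five causes named in Figure 9; the numbers are illustrative assumptions, not data from the text.

import numpy as np
import matplotlib.pyplot as plt

causes = ["contact num.", "price code", "supplier code", "part num.", "schedule date"]
counts = np.array([280, 130, 70, 40, 20])        # hypothetical frequencies

order = np.argsort(counts)[::-1]                 # descending order of frequency
counts = counts[order]
causes = [causes[i] for i in order]
cum_pct = 100 * counts.cumsum() / counts.sum()

x = np.arange(len(causes))
fig, ax1 = plt.subplots()
ax1.bar(x, counts)
ax1.set_xticks(x)
ax1.set_xticklabels(causes, rotation=30)
ax1.set_ylabel("Error frequency")

ax2 = ax1.twinx()                                # cumulative percentage on a second axis
ax2.plot(x, cum_pct, "o-", color="black")
ax2.set_ylabel("Cumulative percentage")
ax2.set_ylim(0, 105)
plt.tight_layout()
plt.show()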

4.1.5 Cause and Effect Diagram

This diagram, also called a fish-bone diagram, was introduced by the Japanese
Professor Dr. Kaoru Ishikawa (hence it is also known as an Ishikawa diagram).
The cause and effect diagram provides a graphical representation of the relationship between the probable causes leading to an effect. The effects are often
the vital few noted in a Pareto chart. The causes are generally due to machines,
materials, methods, measurement, people and environment. The diagram can
also be drawn to represent the flow of the production process and the associated
quality related problems during and between each stage of production. A sample cause and effect diagram is shown in Figure 10, which relates to the causes of printed circuit board surface flaws.
Very often, it may be necessary to establish a relationship between a characteristic of interest and a major cause quantitatively. If this is done using experimental designs, the cause may be shown enclosed in a box. If only empirical
evidence exists, not fully supported by data, then the cause may be underlined.
The main advantage of a cause and effect diagram is that it leads to early detection of a quality problem. Often, a brainstorming session is required to identify

the sub-causes. The disadvantage of the cause and effect diagram is its inability
to show the interactions between the problem causes or factors.
A cause and effect diagram is often used after a brainstorming session. One
other diagram used to organise the ideas, issues, concerns, etc of a brainstorming
session is the affinity diagram. This diagram groups the information based on
the natural relationships between the ideas, issues, etc. A tree diagram is one
which breaks down a subject into its basic elements in a hierarchical way; it can
be derived from an affinity diagram. The basic objective of these diagrams is
to organise the relationships in a logical and sequential manner.

[Figure 10: Cause and Effect diagram. The effect, Surface Flaws, with main branches for Measurements, Materials, Personnel, Environment, Methods and Machines, each carrying sub-causes such as micrometers, alloys, training, moisture, speed, etc.]

4.1.6 Multi-Vari Chart

This chart is used to graphically display the variability due to various factors.
Multi-vari charting is a quick method of analysing the variation and can be
viewed as an EDA tool prior to an advanced (nested) ANOVA model of various
factors.
The simplest form of a multi-vari chart displays the variation over a short
span and a long span of time. For example, five consecutive items are taken
from a grinding operation every half hour and the diameter of the items sampled
is measured. The time taken to produce the five items is the short span of time.
Let us use the range, the difference between the largest and smallest observed
value, as a measure of variability in this short span of time. These range values
can then be shown over the longer period of the study.
If a multi-vari chart indicates instability in either the short or longer term,
the factors causing the instability must be listed and analyzed further. Note
that all production conditions such as change of raw materials and process
interventions such as tool adjustments etc will be noted on the check sheets.
A cause and effect diagram or a brainstorming session will help to list factors
presumably causing such instability.
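As a minimal sketch of the short-span calculation (with hypothetical grinding data): the range is computed within each half-hourly subgroup of five diameters, and the resulting values would then be plotted against sampling time for the long span.

import numpy as np

rng = np.random.default_rng(4)
# 16 half-hourly subgroups of 5 consecutive diameters (hypothetical values)
diameters = rng.normal(25.0, 0.02, size=(16, 5))

short_span = diameters.max(axis=1) - diameters.min(axis=1)   # range within each subgroup
print(np.round(short_span, 3))    # plot these against sampling time for the long span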
4.1.7 Run Chart

A run chart is a particular form of a scatter plot with all the plotted points
connected in some way. This chart usually shows a run of points above and
below the mean or median. Run charts are mainly used as exploratory tools
to understand the process variation. For instance, the stability of a production
process can be crudely judged by plotting the quality measure or the quality
characteristic against time, machine order etc. and then checking for any patterns or non-random behaviour (such as trends, clustering, oscillation, etc) in
the production process.
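A run chart needs nothing more than the measurements in time order and a reference line. A minimal sketch with simulated data (the level and spread are arbitrary choices):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
y = rng.normal(40, 0.05, 50)                     # simulated measurements in time order

plt.plot(y, marker="o")
plt.axhline(np.median(y), color="grey", linestyle="--", label="median")
plt.xlabel("time order")
plt.ylabel("quality measure")
plt.legend()
plt.show()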

5 Common Causes and Process Capability

Common cause variations belong to a system. The third postulate of Shewhart
relates to removal of common causes (hence reducing the common cause variations) by changing the basic system itself. Experimental designs discussed in
Chapter 10 are extremely important to achieve reduction in the product variation. This off-line activity of designing a product commonly employs
statistical designs known as fractional factorial designs, response surface designs
etc. These topics are beyond the scope of a first year course. Hence we will
briefly present a case study reported by Sohn and Park (2006) which appeared
in Quality Engineering journal. A large number of case studies involving factorial designs for reducing variation are available in quality related journals such
as Quality Engineering.
Automotive disc brake systems emit two kinds of noise; one is the low-frequency "groan" noise, and the other is the high-frequency "squeal" noise.
Brake discs are produced through the sequential steps of cutting, grinding, etc.
After experimentation, Sohn and Park (2006) reported that brake discs and
compressibility of brake pads were identified as the most important output responses related to brake noises. Table 1 shows some of the experimental results
reported by Sohn and Park (2006) for improving the compressibility of brake pads (measured in µm under 160 bar pressure). The following factors were known to affect
compressibility:
1. No. of cycles of gas emission (Factor A)
2. Temperature of hot forming process, measured in ˚C (Factor B)
3. Pressure of hot forming process, measured in N/µm² (Factor C)
These three factors were crossed to obtain eight different treatment combinations. Compressibility measurements were made for four replications. The
old settings were found not optimal after comparing the mean and standard
deviations of compressibility. Using statistical modeling techniques, the experimental data were analyzed and then the authors found that the Gas Emission
level must be set at 21 Cycles, Temperature at 145˚C and Pressure at 35.3
N/µm² in order to improve the compressibility as well as reduce its variability.
Note that these predicted optimum levels were not part of the treatments applied during the experimentation but estimated using the appropriate statistical
model. So additional confirmatory experimental runs were made and these runs
confirmed that the model predictions were indeed correct.
Table 1: Experimental Summary for Reducing Compressibility

Treatment      Factor A   Factor B   Factor C   Mean Compressibility   SD of Compressibility
1              18         145        35         147.3                  4.2
2              22         145        35         156                    3.8
3              18         155        35         153.6                  5.9
4              22         155        35         150.3                  4.7
5              18         145        45         154.5                  3.6
6              22         145        45         160                    2.8
7              18         155        45         155.8                  5.3
8              22         155        45         167.3                  5.8
Old settings   20         150        40         157.2                  5.1
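From the treatment means in Table 1, the main effect of each factor on mean compressibility can be estimated as the difference between its high-level and low-level averages. The sketch below performs only that simple calculation; the full analysis in Sohn and Park (2006) relied on statistical modelling to predict the optimum settings.

import numpy as np

# factor levels and mean compressibility for the eight treatments in Table 1
a = np.array([18, 22, 18, 22, 18, 22, 18, 22])
b = np.array([145, 145, 155, 155, 145, 145, 155, 155])
c = np.array([35, 35, 35, 35, 45, 45, 45, 45])
y = np.array([147.3, 156, 153.6, 150.3, 154.5, 160, 155.8, 167.3])

for name, x in [("A", a), ("B", b), ("C", c)]:
    effect = y[x == x.max()].mean() - y[x == x.min()].mean()
    print(f"Main effect of factor {name}: {effect:+.2f}")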

Variability in quality characteristics affects quality, and results in nonconforming units when specifications are not met. In other words, the production
process becomes incapable of meeting the specifications. In order to assess whether a process is capable of meeting the specifications, process capability
indices are defined.

5.1 Cp index

This index is defined as
Cp = (USL − LSL) / (6σ)
where USL and LSL are respectively the upper and lower specification limits
and σ is the standard deviation of the process characteristic. Six sigma spread of
the process is the basic definition of process capability when the quality characteristic follows a normal distribution. If Cp =1, then the process is just capable
of meeting the specifications, see Figure 11. In reality the process standard deviation σ is estimated and the true distribution may depart from normal. Hence
in order to allow for the sampling variability and other assumption violations,
the desired value for the estimated Cp is set at 1.33 for existing processes and 1.5
for new processes. If the estimated index is lower than 1.33, it implies that the
process variability is high, and actions must be taken to reduce it.

[Figure 11: Cp = 1 scenario. A normal density centred at µ whose 6σ spread exactly fills the interval from LSL to USL.]

If the quality characteristic has only one specification, either on the lower or
upper side, then the following indices are used.
• CpL = (µ − LSL) / (3σ)  (lower specification)
• CpU = (USL − µ) / (3σ)  (upper specification)

If the process is centred, i.e. µ lies at the midpoint of the specification limits, then Cp = CpL = CpU. For a better idea of process centring, the index defined next will be used.

5.2 Cpk index

The index Cpk is defined as
Cpk = min{CpL , CpU }.
If Cpk < Cp , it means that the process is not centred. (It will never happen
that Cpk > Cp .) If Cp is high but not Cpk , it means that the process needs
centering. Hence Cp is said to measure the potential capability of the process
whereas Cpk measures the actual capability. Recommended values for CpL , CpU
and Cpk are the same as for Cp . The six sigma methodology aims to achieve a
Cpk of 1.5 to ensure 3.4 DPMO.

5.3 Cpm index

Consider the following specification requirements for which two production processes are available.
LSL = 35, USL = 65 and target = T = 50 units.
Let the process parameters and the associated process capability indices be:
Process I: µ = 50, σ = 5.0, Cp = 1.0, Cpk = 1.0
Process II: µ = 57.5, σ = 2.5, Cp = 2.0, Cpk = 1.0
The two processes have the same Cpk values but obviously the second process
is not at the target. For Process II, it can be observed that the index Cp is not
equal to Cpk . To have a better indicator of process centering at the desired
target, the following index is used:
Cpm = (USL − LSL) / (6τ)

where τ is the square root of the expected squared deviation from the target T, namely

τ² = E(X − T)² = E(X − µ)² + (µ − T)² = σ² + (µ − T)²

The relationship between the Cp and Cpm indices is as follows:

Cpm = (USL − LSL) / (6 √(σ² + (µ − T)²)) = Cp / √(1 + δ²)

where δ = (µ − T)/σ.
The process capability indices are computed in two ways. The first approach
is to obtain the index using an estimate of sigma for the shorter term (for
example, after experimentation). The other approach is to obtain a long term
estimate of the sigma, which is often known after the implementation of the
SPC methods, which will be discussed in later sections.
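All three indices are simple to estimate from a sample of an in-control characteristic. A minimal sketch, simulating Process II from the example above so that the estimates should come out near Cp = 2.0, Cpk = 1.0 and Cpm ≈ 0.63:

import numpy as np

def capability(x, lsl, usl, target):
    """Estimate Cp, Cpk and Cpm from a sample of a normally
    distributed, in-control quality characteristic."""
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((mu - lsl) / (3 * sigma), (usl - mu) / (3 * sigma))
    tau = np.sqrt(sigma**2 + (mu - target)**2)   # square root of E(X - T)^2
    cpm = (usl - lsl) / (6 * tau)
    return cp, cpk, cpm

rng = np.random.default_rng(0)
x = rng.normal(57.5, 2.5, size=10_000)           # Process II: mu = 57.5, sigma = 2.5
print([round(v, 2) for v in capability(x, lsl=35, usl=65, target=50)])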
The interpretation of process capability indices becomes difficult in the following situations:
1. When the process is not in a state of statistical control
2. Non-normal process
3. Correlated process and
4. Inevitable extra variation between different production periods.
Hence caution must be exercised in interpreting the process capability indices.
6 Process Monitoring With Control Charts

A control chart graphically displays a summary statistic of the characteristic(s)
of interest for testing the existence of a state of statistical control in the process.
Figure 12 shows a typical control chart configuration for monitoring the mean
of a quality characteristic. Figure 12 also shows certain limits (labelled LCL
and UCL) called control limits. These limits are different from the specification
limits which represent the extreme possible values of a quality characteristic for
conformance of the individual unit of the product. Control limits on a control
chart are fixed using probability laws and are different from the specification
limits. They are not intended for checking the quality of each unit produced
but as a basis for judging the significance of quality variations from time to time
or sample to sample or lot to lot. If all the special causes are eliminated, then
practically all the plotted points will lie within the control limits.

[Figure 12: A Typical Control Chart. A quality measure (roughly 73.99 to 74.015) plotted for 25 groups, with UCL and LCL lines.]

6.1 Rational Subgrouping

In the technique of control charting, past data are used for judging whether
the future production will be in control or not. To accomplish this, past data
are accumulated. By a subgroup of observations, we mean one of a series of
observations obtained by subdividing a larger group of observations. By a rational subgroup, we mean classifying the observed values into subgroups such
that within a rational subgroup variations are expected to be due to common causes and variations between subgroups may be attributable to special
cause(s). In other words, the subgroups should be such that special causes show
up in differences between the subgroups as against special causes differentiating
the member units of a given subgroup. That is, we would like to see that the
member units of a subgroup are as homogeneous as possible. For example, assume that a machine has four heads operated by the same worker and using the
same raw material. We would like to accumulate data pertaining to the same
machine head to form a subgroup, rather than pooling the data from all the
four machine heads to form the subgroup. Any head-to-head difference will be
reflected in the differences between subgroups. The task of subgrouping requires
some technical knowledge of the production process, particularly the production
conditions and how units were inspected and tested. Subgrouping on the basis
of time is the most useful approach followed in practice. This is because the
process stability tends to be lost over time due to the use of several batches of
raw material(s), changing operatives, etc. The other factors for subgrouping are
batch of raw material, machine, operator, etc. Careful rational subgrouping
is extremely important to ensure the effectiveness of a control chart
and easy identification of assignable causes. The success of the control chart
technique largely relies on proper subgrouping.
For a constant cause system the variation within a subgroup is the same as
the variation across subgroups. Therefore, if the assumption of a constant cause
system is correct, it should be possible to predict the behaviour of statistics such
as sample averages, ranges, and standard deviations, across subgroups based on
the homogeneous variation observed within subgroups. Data from a constant
cause system of variation will display only unexplainable variation both within
and across rational subgroups. The range of variation due to constant causes
will be within predictable statistical limits. Non-random patterns of variation
appearing across the rational subgroups can be treated as the signal for special
causes.
For some processes, uncontrollable factors such as the seasonal nature of
input quality of materials may be involved. Such structural variation will also be treated as common rather than special. Proper subgrouping is expected to take care
of such issues. For example, a monitoring procedure for road accidents must
consider the structural variation due to Fridays and weekends. The monitoring
procedure should be based on two distinct common cause systems namely (i)
Monday-Thursday and (ii) Friday-Sunday. If the whole week is treated as a
subgroup, the common cause variability may be incorrectly estimated.

6.2 Shewhart Control Charts

Let M be some statistic computed using rational subgroups. We will also call
M a control statistic. Let the mean of M be µM and the standard deviation be
σM . Then the central line (CL), the upper control limit (UCL) and the lower
control limit (LCL) are fixed at

UCL = µM + kσM
CL = µM
LCL = µM − kσM
where k is the distance of the control limits from the central line, expressed in
standard deviation units. This configuration, known as Shewhart control chart,
is shown in Figure 13. The estimation of µM and σM and fixing a value for k
are statistical problems.

[Figure 13: Shewhart Control Chart. The statistic M is plotted by subgroup number with Central Line (CL) at µM, Upper Control Limit (UCL) at µM + kσM and Lower Control Limit (LCL) at µM − kσM.]
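A minimal sketch of the limit calculation for an X̄ chart follows. Here σM is estimated from the average within-subgroup standard deviation, a deliberate simplification: in practice a bias-correction constant (such as c4) or a range-based estimate is used.

import numpy as np

def xbar_limits(subgroups, k=3):
    """k-sigma limits for an Xbar chart. Sigma is estimated by the average
    within-subgroup standard deviation, ignoring the small-sample bias
    correction that a tabulated constant such as c4 would provide."""
    n = subgroups.shape[1]
    cl = subgroups.mean()                           # grand mean as the central line
    sigma = subgroups.std(axis=1, ddof=1).mean()    # average within-subgroup SD
    se = sigma / np.sqrt(n)                         # standard deviation of a subgroup mean
    return cl - k * se, cl, cl + k * se

rng = np.random.default_rng(2)
data = rng.normal(74.0, 0.005, size=(25, 5))        # 25 simulated subgroups of size 5
lcl, cl, ucl = xbar_limits(data)
print(f"LCL = {lcl:.4f}, CL = {cl:.4f}, UCL = {ucl:.4f}")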

The Shewhart control charts can be classified as of variables or attribute type depending on the type of quality measure(s) adopted (see Table 2).

Table 2: Control Chart Types

Control statistic M        Name of Chart       Type
Average X̄                  X̄-chart             Variables
Standard deviation S       S-chart             Variables
Range R                    R-chart             Variables
Proportion defective p     p-chart             Attribute
Number of defectives np    np-chart            Attribute
Number of defects c        count or c-chart    Attribute
Defects per unit u         u-chart             Attribute

Shewhart recommended a value of 3 for the control limit constant k. Hence the control limits are known as 3-sigma limits. If the computed value of the quality measure for a given subgroup breaches the 3-sigma limits, action will be initiated to look for assignable causes. Hence, we will also call the 3-sigma limits action limits. There are several reasons for using 3-sigma limits. The important ones are:
1. Special cause investigation is expensive for most engineering processes and also time consuming. Hence it is important to keep the frequency of false alarms low. That is, we want to avoid tampering with the process looking for special causes when there are none. In order to keep the rate of false

alarms low, we need to tolerate a wider variation in M . If M has a normal
distribution, the probability of M breaching the 3 sigma control limits is
0.0027, which is very small. Even if the original characteristic of interest
X does not follow a normal distribution, the statistic under consideration M such as the mean will approximately follow a normal distribution
provided the sample or subgroup sizes are large. In fact, versions of the central limit theorem only require the variables being averaged to be independent, not necessarily identically distributed.
2. Even if the assumption of normal distribution is not always easy to justify
for the individual characteristic values, 3-sigma control charts for individual values can be used with caution. In a worst case situation (without
the normal or unimodal assumption), three sigma limits will cover only
about 89% of the distribution (due to Chebychev’s inequality, and we will
not worry about the theory here). If the process characteristic follows a
unimodal probability distribution, then the three sigma limits will cover
95% of the distribution, resulting in about 5% false alarm rate (due to a
theoretical result known as Gauss inequality).
3. No process characteristic will follow exactly a theoretical distribution.
Hence it is difficult to calculate the exact false alarm rates. With 3 sigma
limits, it is economical to investigate the process for special causes as
against other limits. The 3 sigma limits were found to work well for many
different types of processes.
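The 0.0027 false alarm rate quoted in reason 1 can be checked directly from the standard normal distribution:

from statistics import NormalDist

# two-sided probability of a normally distributed statistic
# falling outside its 3-sigma limits
print(2 * (1 - NormalDist().cdf(3)))   # approximately 0.0027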
In order to compute the control limits, the unknown parameters µM and σM
of the process are estimated using extensive past (or historical or retrospective)
data. This is known as Phase I for setting up the control charts. Phase I data
analysis is intended to obtain “clean” data to represent the process having only
common causes. The EDA tools studied in earlier chapters and the simple QC
tools explained in this Chapter will prove invaluable for Phase I analysis.
Based on Phase I investigation, parameters such as the mean, standard deviation etc of a typical process in control will be hypothesised. Such control charts
based on the hypothesised values will be called standards given charts. These
charts will be implemented in Phase II for monitoring the current production.
That is, subgroups will be drawn one by one and plotted on the Standards given
control chart. An example is given in the next section.
The basic signal rule or test for the presence of special causes is that either
the upper or the lower control limit must be breached by a plotted point. This
rule can be supplemented by extra tests based on a series of points for declaring
the presence of special causes. The commonly adopted rules (known as Western
Electric supplementary run rules) are illustrated in Figure 14. Note that applying all these rules together will hugely increase the false alarm rate, and only a selected combination of them should be employed.

[Figure 14: Supplementary Run Rules. Each side of the central line is divided into Zone C (within 1 sigma), Zone B (1 to 2 sigma) and Zone A (2 to 3 sigma).
Test 1: One point beyond Zone A (the usual signal with action limits).
Test 2: Nine points in a row in Zone C and beyond (on one side of the central line).
Test 3: Six points in a row steadily increasing or decreasing.
Test 4: Fourteen points in a row alternating up and down.
Test 5: Two out of three points in a row in Zone A or beyond.
Test 6: Four out of five points in a row in Zone B or beyond.
Test 7: Fifteen points in a row in Zone C (above and below the central line).
Test 8: Eight points in a row on both sides of the central line with none in Zone C.]
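Two of these tests are sketched below, as a minimal illustration of how mechanically they apply to a sequence of plotted values (not a full implementation of all eight rules):

def beyond_limits(points, lcl, ucl):
    """Test 1: indices of points breaching the action limits."""
    return [i for i, m in enumerate(points) if m < lcl or m > ucl]

def nine_on_one_side(points, cl):
    """Test 2: indices at which a run of nine consecutive points
    on the same side of the central line is completed."""
    signals, run, side = [], 0, 0
    for i, m in enumerate(points):
        s = 1 if m > cl else (-1 if m < cl else 0)
        run = run + 1 if (s == side and s != 0) else 1
        side = s
        if s != 0 and run >= 9:
            signals.append(i)
    return signals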

7 Variables Control Charts

Variables control charts are used when the quality characteristic is measurable
on a continuous scale. For example, the dimension of a piston ring is measurable
as a continuous variable. In a typical process, there may be several hundred
variables, and only key performance or use characteristics are considered for
control charting using variables charts. The control statistic M for variables
charts is usually the mean of the quality characteristic. That is, the intention
of control charting is to monitor the process level. For example, the true mean
dimension of the piston ring may change either upward or downward during
the production. Hence the subgroup means are used to monitor the process
level, and the resulting chart is known as the Xbar (X̄) chart. This chart will
be accompanied by either the range (R) chart or standard deviation (S) chart
which will monitor the increase in (within subgroup) variability over time.

7.1 Estimation of Common Cause Sigma

To construct both charts, it is necessary to estimate the mean (µ) and standard
deviation (σ) of the quality characteristic during a state of control (i.e., when
the process is subject to only common causes) using historical data. The
approach followed in SPC for estimating µ and σ will be explained using the
following example.
Cotton yarn is produced by a spinning machine (called a ring spinning frame).
This machine draws slivers of cotton (which are produced by several preparatory
machines) and spins them into yarn of continuous lengths. A typical spinning
mill will have a number of spinning frames, each machine having a number of
spindles, with the spinning operation done by each of the spindles. Let us
consider the problem of controlling the process of spinning yarn (using the
quality characteristic YARNCOUNT of the yarn produced) for a given spinning
frame. YARNCOUNT is the number of 840 yard lengths in one pound of yarn;
obviously, the higher the YARNCOUNT, the thinner the yarn. A unit for
inspection purposes is fixed at a 120 yard length of yarn (called a lea).



[Figure 14: Supplementary Run Rules. Each panel shows a control chart divided
into zones C (within 1σ of the target), B (between 1σ and 2σ) and A (between
2σ and 3σ) on either side of the central line, illustrating the following tests:
Test 1: One point beyond Zone A (the usual signal with action limits).
Test 2: Nine points in a row in Zone C or beyond, on one side of the central line.
Test 3: Six points in a row steadily increasing or decreasing.
Test 4: Fourteen points in a row alternating up and down.
Test 5: Two out of three points in a row in Zone A or beyond.
Test 6: Four out of five points in a row in Zone B or beyond.
Test 7: Fifteen points in a row in Zone C (above and below the central line).
Test 8: Eight points in a row on both sides of the central line with none in Zone C.]

Lea testing and measuring instruments are available which will quickly
determine quality characteristics such as YARNCOUNT, strength, and the number
of thick and thin places.
Let the nominal or target YARNCOUNT be 40, i.e. a pound of yarn will give
40(840) = 33600 yards of length. Assume that the lower and upper specification
limits for YARNCOUNT are 39.8 and 40.2 respectively.
The mill was sampling five leas from randomly selected spindles of the
spinning frame for testing during a production shift. On some days/shifts
samples were not taken; multiple samples were taken on a few shifts. Table 3
provides the historical data collected by the mill. This table also indicates
certain important process conditions that were noted during sampling. The mill
found that the input for the spinning machine, namely the yarn slivers produced
in the preparatory process, was not uniform during certain periods. Such cases,
indicated as 'input sliver problem', occurred intermittently. It took some time
to locate the sources of trouble in the preparatory stages and correct this
problem. Samples numbered 17 and 34 are associated with clear (engineering)
evidence that they represent unusual production conditions or measurement
problems. These samples must be dropped. The same is the case with the
subgroups associated with the 'input sliver problem', and hence the samples
numbered 4, 14 and 21 are also dropped. All cases where there is no strong
technical evidence for lack of control, such as sample 25 (casual operator
employed), will be included in the analysis using control charts.
It is also very likely that certain unusual production conditions or special
causes existed during the period of data collection which are not evident in
the ordinary course of production operations. A trial control chart for our
Phase I analysis will be used to detect the presence of special causes so
that further technical investigation can be initiated to locate and eliminate
the sources of trouble.
How YARNCOUNT varies during a production shift is important for rational
subgrouping and the effectiveness of the control charts. More studies must be
done by collecting data in the same shift at different time intervals, to
understand how other process variables, such as interference due to doffing,
operator breaks, maintenance schedules, etc, affect YARNCOUNT. They may suggest
more frequent sampling during a given production shift. One of the common ways
of subgrouping is to use a (small) block of time and allow a somewhat longer
time between subgroups. Frequent sampling with small subgroups is more useful
than infrequent sampling with large subgroups.
In order to estimate the true mean (µ) and standard deviation (σ) of the
quality characteristic YARNCOUNT, the historical data will NOT be pooled.
The retrospective data may contain periods dominated by one or more special
causes, and a pooled estimate of the standard deviation can be used only when
the process is known to be in control, i.e. dominated only by common causes.
The Shewhart control charts allow only common cause variation within a
subgroup; any extra variation between subgroups is inadmissible and will be
attributed to the presence of special causes. For any given subgroup i, the
usual standard deviation S_i (with n−1 in the divisor) or the range R_i will
be used to estimate the true process standard deviation σ.

Table 3: Retrospective Data for Trial Control Chart

Sample  Date   Shift  Obs 1  Obs 2  Obs 3  Obs 4  Obs 5  Remarks
1       28-09    1    40.0   39.9   40.0   40.1   40.0
2       28-09    2    40.0   40.0   40.1   40.0   40.1
3       28-09    3    40.0   40.0   40.0   40.0   39.9
4       29-09    1    41.0   40.9   41.0   41.0   41.0   input sliver problem
5       29-09    2    40.0   40.0   40.1   40.0   40.0
6       29-09    3    40.0   40.0   40.0   39.9   39.9
7       30-09    2    40.0   40.1   40.0   40.0   40.0
8       30-09    3    40.0   40.0   39.9   40.0   40.1
9       04-10    1    40.1   40.0   39.9   40.0   40.0
10      04-10    2    40.0   40.0   40.1   40.0   40.0
11      04-10    3    40.0   40.0   40.0   40.1   40.0
12      05-10    1    40.0   40.0   40.0   40.0   40.0
13      05-10    2    40.0   40.0   40.0   39.9   40.0
14      05-10    3    40.5   41.0   41.0   40.8   41.0   input sliver problem
15      06-10    2    40.0   39.9   40.0   40.0   40.0
16      06-10    3    40.1   40.0   40.0   40.0   40.0
17      07-10    1    40.1   NA     NA     NA     NA     faulty motor
18      07-10    2    40.1   40.0   40.1   39.9   39.9
19      07-10    3    40.0   39.9   40.0   40.1   40.0
20      08-10    1    40.1   40.0   40.0   40.0   40.1
21      08-10    2    39.0   38.1   39.0   39.0   39.6   input sliver problem
22      08-10    3    40.0   40.0   40.0   40.1   40.0
23      11-10    1    39.9   40.0   39.9   40.0   40.0
24      11-10    1    40.0   40.0   40.0   40.1   40.1
25      11-10    2    40.2   40.0   40.1   40.0   40.0   casual operative
26      11-10    3    40.1   40.0   40.0   40.0   39.9
27      12-10    1    40.1   40.1   40.1   40.0   40.1
28      12-10    2    40.0   40.0   40.0   40.0   40.0
29      12-10    3    40.0   40.1   40.0   39.9   40.0
30      13-10    1    40.0   40.0   40.1   40.0   40.0
31      13-10    2    39.9   40.0   40.0   40.1   39.9
32      13-10    2    40.0   40.0   40.0   40.0   40.1
33      13-10    3    39.9   40.0   40.1   40.0   39.9
34      14-10    1    60.0   59.9   60.0   40.1   40.0   Yarncount mix up
35      14-10    2    40.0   40.1   40.1   40.0   40.0
36      14-10    3    40.1   40.1   40.0   39.9   40.0
37      15-10    1    40.1   40.0   39.9   40.0   40.0
38      15-10    2    39.9   40.0   40.0   40.1   40.1
39      15-10    3    40.1   40.0   40.1   40.0   40.0

If there are m (say) such subgroups, the mean of the m subgroup standard
deviations (S_i values) or ranges (R_i values) will be used to estimate the
process σ. Similarly, the mean of the m subgroup means (X̄_i values) is used
to estimate the true process mean µ. Consider Table 4, which gives the means,
standard deviations and ranges for the YARNCOUNT data. Note that this table
omits samples 4, 14, 17, 21 and 34 and therefore relates to a total of 34
subgroups only.
Table 4: Subgroup Means, Ranges and Standard Deviations

Old sample  Subgroup   X̄_i     R_i    S_i
number      i
1            1        40.00   0.2    0.0707
2            2        40.04   0.1    0.0548
3            3        39.98   0.1    0.0447
5            4        40.02   0.1    0.0447
6            5        39.96   0.1    0.0548
7            6        40.02   0.1    0.0447
8            7        40.00   0.2    0.0707
9            8        40.00   0.2    0.0707
10           9        40.02   0.1    0.0447
11          10        40.02   0.1    0.0447
12          11        40.00   0.0    0.0000
13          12        39.98   0.1    0.0447
15          13        39.98   0.1    0.0447
16          14        40.02   0.1    0.0447
18          15        40.00   0.2    0.1000
19          16        40.00   0.2    0.0707
20          17        40.04   0.1    0.0548
22          18        40.02   0.1    0.0447
23          19        39.96   0.1    0.0548
24          20        40.04   0.1    0.0548
25          21        40.06   0.2    0.0894
26          22        40.00   0.2    0.0707
27          23        40.08   0.1    0.0447
28          24        40.00   0.0    0.0000
29          25        40.00   0.2    0.0707
30          26        40.02   0.1    0.0447
31          27        39.98   0.2    0.0837
32          28        40.02   0.1    0.0447
33          29        39.98   0.2    0.0837
35          30        40.04   0.1    0.0548
36          31        40.02   0.2    0.0837
37          32        40.00   0.2    0.0707
38          33        40.02   0.2    0.0837
39          34        40.04   0.1    0.0548

The overall mean of the m = 34 subgroup means is the estimate of µ. Let us
denote the grand or overall mean as X̿ (X double bar). That is,

    µ̂ = X̿ = (1/m) Σ_{i=1}^{m} X̄_i = (40.00 + 40.04 + ... + 40.04)/34 = 40.01
where X̄_i is the ith subgroup mean. The process standard deviation σ is
estimated as

    σ̂ = S̄/c4

where S̄ is the average of the subgroup standard deviations, viz.

    S̄ = (1/m) Σ_{i=1}^{m} S_i,

S_i is the standard deviation of the ith subgroup, and c4 is a constant that
makes σ̂ an unbiased estimator of σ. That is, c4 = E(S)/σ; hence c4 is known
as the unbiasing constant. c4 is purely a function of the subgroup size n,
and values are given in Table 5. For the YARNCOUNT data,

    S̄ = (0.0707 + 0.0548 + ... + 0.0548)/34 = 0.05703,

giving σ̂ = 0.05703/0.94 = 0.0607.
Table 5: Unbiasing constants for Ranges and Standard Deviations

n     c4       d2
2     0.7979   1.128
3     0.8862   1.693
4     0.9213   2.059
5     0.9400   2.326
6     0.9515   2.534
10    0.9727   3.078
15    0.9823   3.472
20    0.9869   3.735
25    0.9896   3.931

It is also possible to estimate the process sigma using ranges. The estimator
is

    σ̂ = R̄/d2

where R̄ is the mean of the subgroup ranges, given by

    R̄ = (1/m) Σ_{i=1}^{m} R_i,

and d2 is the unbiasing constant for the range, i.e. d2 = E(R)/σ; values of d2
are given in Table 5 for selected subgroup sizes. For the YARNCOUNT data, we
find

    R̄ = (0.2 + 0.1 + ... + 0.1)/34 = 0.1324,

yielding σ̂ = R̄/d2 = 0.1324/2.326 = 0.0569. In general (i.e. if the subgroup
size n > 2), the range estimate of σ is inefficient compared with the estimate
based on the standard deviation. The sketch below illustrates both estimates.
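As an illustration, the following minimal Python sketch computes both estimates; it uses the n = 5 constants from Table 5 and, to stay brief, only the first few of the 34 subgroup statistics from Table 4 (with all 34 values one obtains S̄ = 0.05703 and R̄ = 0.1324 as above):

```python
# Sketch: common cause sigma estimated from subgroup S and R statistics.
from statistics import mean

S = [0.0707, 0.0548, 0.0447, 0.0447, 0.0548]  # first few Si values (Table 4)
R = [0.2, 0.1, 0.1, 0.1, 0.1]                 # corresponding Ri values
c4, d2 = 0.9400, 2.326                        # unbiasing constants for n = 5

sigma_hat_S = mean(S) / c4                    # S-bar / c4
sigma_hat_R = mean(R) / d2                    # R-bar / d2
print(sigma_hat_S, sigma_hat_R)
```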

7.2 X̄ chart

After estimating the true process level as X̿ and the common cause sigma as σ̂,
the control limits of the X̄-chart are established using these estimated values
of µ and σ. Since V(X̄) = σ²/n, the 3-sigma control limits for the X̄-chart are
obtained as

    µ̂ ± 3 σ̂/√n.

Hence the control limits for the X̄-chart based on the standard deviation
estimate of σ are:

    LCL = X̿ − 3 (S̄/c4)/√n,    UCL = X̿ + 3 (S̄/c4)/√n.

For the YARNCOUNT data, it is easy to see that

    LCL = 40.01 − 3(0.0607/√5) = 39.929,
    UCL = 40.01 + 3(0.0607/√5) = 40.091.

Similarly, the control limits for the X̄-chart based on the range estimate of σ
are:

    LCL = X̿ − 3 (R̄/d2)/√n,    UCL = X̿ + 3 (R̄/d2)/√n.

These X̄-chart control limits for the YARNCOUNT data are:

    LCL = 40.01 − 3(0.0569/√5) = 39.933,
    UCL = 40.01 + 3(0.0569/√5) = 40.086.
It is easy to compute the control limits using the formulae appearing in
Tables 6 to 8. These tables also give certain constants (A, A2, A3, B3, B4,
B5, B6, D1, D2, D3, D4) called control limit factors. For example, Table 6
gives the control limits (based on the R̄ estimate of σ) as X̿ ± A2 R̄, with
control limit factor A2 = 0.577 for n = 5 (which is equal to 3/(d2 √n)). The
control limit factors are useful for manual computation of control limits; a
short computational sketch follows.
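A minimal Python sketch of this derivation, reproducing the n = 5 factors and the YARNCOUNT limits computed above (the numerical inputs are those given in the text):

```python
# Sketch: control limit factors as functions of the subgroup size n,
# and the resulting Xbar-chart limits for the YARNCOUNT data.
from math import sqrt

n, c4, d2 = 5, 0.9400, 2.326
A2 = 3 / (d2 * sqrt(n))          # 0.577, factor used with R-bar
A3 = 3 / (c4 * sqrt(n))          # 1.427, factor used with S-bar

Xbb, Rbar, Sbar = 40.01, 0.1324, 0.05703
print(Xbb - A2 * Rbar, Xbb + A2 * Rbar)   # approx. 39.933 and 40.086
print(Xbb - A3 * Sbar, Xbb + A3 * Sbar)   # approx. 39.929 and 40.091
```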
Table 6: Formulae and constants for the X̄ control chart

subgroup size n    A       A2      A3
2                  2.121   1.880   2.659
3                  1.732   1.023   1.954
4                  1.500   0.729   1.628
5                  1.342   0.577   1.427
6                  1.225   0.483   1.287
10                 0.949   0.308   0.975
15                 0.775   0.223   0.789
20                 0.671   0.180   0.680
25                 0.600   0.153   0.606

Control limits
For Analysing Past Production for Control (Standards Unknown):
  central line = X̿;  control limits = X̿ ± A2 R̄ or X̿ ± A3 S̄.
For Controlling Quality during Production (Standards Known):
  central line = X̄0;  control limits = X̄0 ± A σ0 (or equivalently
  X̄0 ± A2 R0, where R0 = d2 σ0).

The control limits will be displayed on a time sequence plot or run chart for
the mean. That is, the subgroup means X̄_i will be plotted against the subgroup
number i, with reference lines for the control limits. The overall mean X̿ will
also be placed on the chart to produce the central line. Figure 15 is the
resulting X̄-chart for the YARNCOUNT data based on the S̄ estimate of σ.

[Figure 15: X̄-chart for the YARNCOUNT data, showing the subgroup means plotted
against subgroup number (1 to 34) with the central line at X̿ = 40.01,
LCL = 39.929 and UCL = 40.091.]
None of the plotted points (subgroup means X̄_i) crossed the control limits,
and hence we say that the mean YARNCOUNT is under control. In other words,
the chart indicates that there was no significant shift in the mean YARNCOUNT
level over the past production periods.
In order to answer the question whether the variability within the subgroups
is stable, either an R-chart (range chart) or an S-chart (standard deviation
chart) is used. These charts evaluate the variability within the process in
terms of the subgroup ranges and standard deviations. One of these two charts
will accompany the X̄-chart to monitor the variation within the subgroups over
time. We will use Tables 7 and 8 for the relevant formulae and control limit
factors for the S and R charts respectively.

7.3 S chart

The control limits are given by LCL = B3 S̄ and UCL = B4 S̄, with the central
line at S̄. For the given subgroup size of 5, one finds the control limit
factors B3 = 0 and B4 = 2.089 from Table 7. For the YARNCOUNT data,
S̄ = 0.05703 and hence

    LCL = 0(0.05703) = 0 and UCL = 2.089(0.05703) = 0.119.

These control limits are then placed on a run chart of the S_i values with a
reference central line at S̄ = 0.05703. Figure 16 is the S chart for the
YARNCOUNT data.

Table 7: Formulae and constants for the S-chart

subgroup size n    B3      B4      B5      B6      c4
2                  0       3.267   0       2.606   0.7979
3                  0       2.568   0       2.276   0.8862
4                  0       2.266   0       2.088   0.9213
5                  0       2.089   0       1.964   0.9400
6                  0.030   1.970   0.029   1.874   0.9515
10                 0.284   1.716   0.276   1.669   0.9727
15                 0.428   1.572   0.421   1.544   0.9823
20                 0.510   1.490   0.504   1.470   0.9869
25                 0.565   1.435   0.559   1.420   0.9896

(c4 is the factor for the central line.)
Control limits
For Analysing Past Production for Control (Standards Unknown):
  central line = S̄;  control limits = B3 S̄ and B4 S̄.
For Controlling Quality during Production (Standards Known):
  central line = c4 σ0;  control limits = B5 σ0 and B6 σ0.

[Figure 16: S chart for the YARNCOUNT data, showing the subgroup standard
deviations plotted against subgroup number with the central line at
S̄ = 0.05703, LCL = 0 and UCL = 0.119.]

Table 8: Formulae and constants for the R-chart

subgroup size n    D1      D2      D3      D4      d2
2                  0       3.686   0       3.267   1.128
3                  0       4.358   0       2.575   1.693
4                  0       4.698   0       2.282   2.059
5                  0       4.918   0       2.115   2.326
6                  0       5.078   0       2.004   2.534
10                 0.687   5.549   0.223   1.777   3.078
15                 1.203   5.741   0.347   1.653   3.472
20                 1.549   5.921   0.415   1.585   3.735
25                 1.806   6.056   0.459   1.541   3.931

(d2 is the factor for the central line.)
Control limits
For Analysing Past Production for Control (Standards Unknown):
  central line = R̄;  control limits = D3 R̄ and D4 R̄.
For Controlling Quality during Production (Standards Known):
  central line = d2 σ0 = R0;  control limits = D1 σ0 and D2 σ0
  (or D3 R0 and D4 R0).

None of the plotted points breaches the UCL, and hence we conclude that the
variability within the process is in control.
Now consider the computation of the control limits for the R-chart.

7.4 R chart

The control limits are given by LCL = D3 R̄ and UCL = D4 R̄, with the central
line at R̄. For the given subgroup size of 5, one finds the control limit
factors D3 = 0 and D4 = 2.115 from Table 8. For the YARNCOUNT data,
R̄ = 0.1324 and hence

    LCL = 0(0.1324) = 0 and UCL = 2.115(0.1324) = 0.280.

These control limits are then placed on a run chart of the R_i values with a
reference central line at R̄ = 0.1324. Figure 17 is the R-chart for the
YARNCOUNT data.

[Figure 17: R chart for the YARNCOUNT data, showing the subgroup ranges plotted
against subgroup number (1 to 34) with the central line at R̄, LCL = 0 and
UCL = D4 R̄.]

Again the R chart suggests that the variability within the process is under
control.
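The S- and R-chart computations above can be summarised in a short Python sketch (the factors are the n = 5 entries of Tables 7 and 8; only a few of the 34 subgroup statistics are listed, for brevity):

```python
# Sketch: S-chart and R-chart limits, with a simple breach check.
B3, B4 = 0.0, 2.089          # S-chart factors, n = 5 (Table 7)
D3, D4 = 0.0, 2.115          # R-chart factors, n = 5 (Table 8)
Sbar, Rbar = 0.05703, 0.1324

s_lcl, s_ucl = B3 * Sbar, B4 * Sbar   # (0, 0.119)
r_lcl, r_ucl = D3 * Rbar, D4 * Rbar   # (0, 0.280)

S = [0.0707, 0.0548, 0.0447, 0.1000, 0.0894]   # a few Si values
breaches = [i for i, s in enumerate(S, start=1) if not s_lcl <= s <= s_ucl]
print((s_lcl, s_ucl), (r_lcl, r_ucl), breaches)   # breaches == []
```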

7.5 Revision of Control Charts

When a signal (which could possibly be a false alarm) for lack of control is
obtained from a control chart, one usually looks for the presence of special
causes. In the event of finding and eliminating a special cause, it is
necessary to revise the control limits after deleting the subgroup(s) which
signalled the presence of special causes. It is also possible that this
identified special cause affected the process represented by certain subgroups
adjacent to the one that signalled its presence. Any such subgroup which was
influenced by the special cause variables must also be dropped. In other words,
we identify a set of subgroups that represents a process subject to only chance
or common causes. A point breaching the X̄-chart limits need not necessarily be
a breaching point on the R- or S-chart (and vice versa); such a point need not
be dropped from the associated R- or S-chart, nor will it call for a revision
of that chart. For a normally distributed quality characteristic X, the mean X̄
and the sample variance S² are independently distributed, and hence such an
action can be justified.
For the YARNCOUNT data, all the points lie within the control limits. Hence
the standard values of the mean and standard deviation for Phase II charting
are set as:

    standard value for the mean: X̄0 = 40.01;
    standard value for the standard deviation: σ0 = S̄/c4 = 0.0607.

Using these fixed standard values, X̄ and R-charts or X̄ and S-charts are drawn
using the standards known control limit constants. This chart is for future
use, i.e. for real time or Phase II control. That is, the production process
will be sampled for quality monitoring, with subgroups formed one by one. As
soon as a subgroup is formed, the point is plotted on the control charts for
the standards known case.
If any later subgroup is found to be associated with a special cause, then it
is necessary to disregard that subgroup while revising the standards, which is
usually done once 50 or 100 subgroups have been taken.
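A minimal sketch of the Phase II (standards given) check for one incoming subgroup, using the standard values fixed above (the helper name check_subgroup is ours, not standard terminology):

```python
# Sketch: standards given Xbar and S chart limits for Phase II monitoring.
from math import sqrt

X0, sigma0, n = 40.01, 0.0607, 5
A = 3 / sqrt(n)                 # factor for the Xbar chart, standards given
B5, B6 = 0.0, 1.964             # S-chart factors for n = 5 (Table 7)

xbar_lims = (X0 - A * sigma0, X0 + A * sigma0)
s_lims = (B5 * sigma0, B6 * sigma0)   # central line of the S chart: c4*sigma0

def check_subgroup(xbar, s):
    """True if the subgroup signals no lack of control on either chart."""
    return (xbar_lims[0] <= xbar <= xbar_lims[1]) and (s_lims[0] <= s <= s_lims[1])

print(check_subgroup(40.02, 0.05))   # True: no signal
```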

7.6 Control chart for Individual Values

For some processes, such as chemical processes, it is not possible to define a
subgroup in terms of a discrete number of n items on which measurements can be
made. For example, a bulk (composite) sample of milk powder taken at a given
time will yield only one measured value on a quality characteristic such as
percentage milk fat. So for processes where "one at a time" data are collected,
the control chart must be based on individual values. This chart is known as
the I-chart; it is similar to an X̄ chart with subgroup size 1.
For I-charts, the process standard deviation can be estimated in two ways. For
Phase I analysis, m past individual measurements (X1, X2, ..., Xm) are used to
obtain the estimate σ̂ = S/c4. The control limits are then set at

    X̄ ± 3 S/c4

where X̄ = (1/m) Σ_{i=1}^{m} X_i.
The alternative method of estimating σ is to use the average moving range,
which is given by

    MR = (1/(m−1)) Σ_{i=2}^{m} MR_i = (1/(m−1)) Σ_{i=2}^{m} |X_i − X_{i−1}|.

That is, σ is estimated as

    σ̂ = MR/d2

where d2 is the constant corresponding to sample size 2 (since each moving
range is based on two successive observations). The control limits are then
set at

    X̄ ± 3 MR/d2.
Figure 18 shows a typical I-Chart for monitoring viscosity of a chemical.
[Figure 18: I-chart for viscosity, showing individual viscosity values for 15
samples plotted with the central line, UCL and LCL.]

While the S method of estimation is preferred to the moving range method on
statistical grounds, it poses a difficulty in Phase II monitoring: we cannot
compute a standard deviation from a single datum. Hence moving ranges are
plotted to obtain a Moving Range chart. The formulae and control limit
constants given in Table 8 (with n = 2) apply equally to moving range charts.
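A minimal sketch of the moving range calculation for an I-chart (the viscosity-like values are illustrative and are not the data behind Figure 18):

```python
# Sketch: I-chart limits from the average moving range (d2 = 1.128, since
# each moving range is based on two successive observations).
from statistics import mean

x = [33.5, 33.8, 33.2, 34.0, 33.6, 33.9, 33.4, 33.7]   # individual values
mr = [abs(b - a) for a, b in zip(x, x[1:])]            # moving ranges
xbar, mrbar, d2 = mean(x), mean(mr), 1.128

lcl, ucl = xbar - 3 * mrbar / d2, xbar + 3 * mrbar / d2
print(lcl, xbar, ucl)
```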

7.7 Time-weighted Charts

Shewhart control charts are useful for quickly detecting sudden big shifts in a
production process, but they are not sensitive for detecting small shifts in
the process level. The supplementary run rules improve the sensitivity of
Shewhart control charts for detecting small process changes. However, advanced
control charting procedures which give time-varying weights to the observations
are more powerful for detecting small changes in the process level. The
following control charts are suitable when higher sensitivity is desired:
1. Moving Average (MA) charts: These charts are based on the control statistic

    M_t = (X̄_{t−w+1} + X̄_{t−w+2} + ... + X̄_{t−1} + X̄_t)/w

where w is the length or span of the moving average, M_t is the moving average
of span w (> 0) at time t, and X̄_t is the current subgroup average.
2. Exponentially Weighted Moving Average (EWMA) charts: The exponentially
weighted moving average is defined as

    Z_t = λ X̄_t + (1 − λ) Z_{t−1}

where λ (0 < λ ≤ 1) is a constant. The charting procedure is based on the EWMA;
a small sketch follows this list.
3. CUSUM charts: The cumulative sum (CUSUM) charts accumulate the deviations
from the target level after allowing for some slackness due to common causes.
If only common causes are present, the deviations from the target will be both
positive and negative, and the CUSUM will not grow large. If the CUSUM grows
too large, it is an indication that the process level has shifted due to
special causes. So a decision limit, similar to a control limit, is set for
the CUSUM.
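As an illustration of the EWMA recursion, a minimal sketch follows (λ = 0.2 and a starting value Z0 equal to the target are common choices, assumed here; the EWMA control limits, which depend on λ and t, are not shown):

```python
# Sketch: the EWMA statistic Zt = lam*Xbar_t + (1 - lam)*Z_{t-1}.
def ewma(xbars, lam=0.2, z0=40.0):
    z, out = z0, []
    for x in xbars:
        z = lam * x + (1 - lam) * z   # exponentially weighted update
        out.append(z)
    return out

print(ewma([40.00, 40.04, 39.98, 40.02, 40.06]))
```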

8 Attribute Control Charts

By the attribute method we mean the measurement of quality by noting the
presence or absence of some characteristic or property in each of the units,
and counting how many units do not possess the quality characteristic or
property, or simply counting how many times predefined events of nonconformance
occurred. The advantage of the attribute method is that a single chart can be
set up for several characteristics, whereas a variables chart must be set up
for each characteristic, with an accompanying chart for controlling
variability.

8.1 p-Chart for Fraction Nonconforming

This chart is also known as the proportion chart. If percentages are used
instead of proportions, the p-chart becomes a percent chart. The p-chart
configuration is intended to evaluate the process in terms of the proportion
or fraction of nonconforming units. A unit may be classified as nonconforming
based on predefined classification events such as the breach of a
specification, a go/not-go gauge, judgment, etc. The unit may be classified as
nonconforming in the presence of a nonconformity, defect, blemish, or the
presence or absence of some characteristic. The classification may also be
based on several characteristics.
Let p stand for the true fraction nonconforming of the process and p̂ be the
sample fraction nonconforming, computed as the ratio of the number of
nonconforming units d to the sample size n; that is, p̂ = d/n. Commonly d
follows a binomial distribution with parameters n and p, i.e.

    P(d) = C(n, d) p^d (1 − p)^(n−d),   d = 0, 1, 2, ..., n,

where C(n, d) is the binomial coefficient. The mean and variance of p̂ are p
and p(1 − p)/n respectively. If the true value of p is known, the control
limits become

    p ± 3 √(p(1 − p)/n)

with the central line at p. Here p could be a standard value p0.
Suppose that the true fraction nonconforming is unknown. As usual, it is
assumed that the total number of units tested from the process is subdivided
into m rational subgroups consisting of n1, n2, ..., nm units respectively,
and a value of the proportion defective is computed for each subgroup. For
convenience, let us assume that the subgroup sizes are all equal to n. If di
is the number of defectives found in the ith subgroup, then the estimate of p
from that subgroup is p̂_i = di/n. The average of the p̂_i values is

    p̄ = Σ_{i=1}^{m} di / (mn).

The control limits are set at

    p̄ ± 3 √(p̄(1 − p̄)/n).

For example, consider the data given in Table 9 on the number of defectives in
50 subgroups of 100 resistors each, drawn from a process. The value of p̄ is
0.01. The control limits are found as

    0.01 ± 3 √(0.01(1 − 0.01)/100),

or 0 to 0.03985. If the computed value of the LCL is negative, it is set at
zero; this means that there is no 'control' exercised to detect any quality
improvement. Figure 19 provides the p-chart for these data. Table 9 also gives
the p̂_i values needed for plotting the p-chart.
Table 9: Nonconforming resistors in various subgroups

i    di   p̂i      i    di   p̂i
1    0    0.00    26   0    0.00
2    0    0.00    27   0    0.00
3    2    0.02    28   1    0.01
4    0    0.00    29   0    0.00
5    1    0.01    30   0    0.00
6    0    0.00    31   1    0.01
7    2    0.02    32   3    0.03
8    2    0.02    33   0    0.00
9    1    0.01    34   1    0.01
10   1    0.01    35   2    0.02
11   0    0.00    36   2    0.02
12   2    0.02    37   0    0.00
13   1    0.01    38   2    0.02
14   1    0.01    39   2    0.02
15   1    0.01    40   1    0.01
16   0    0.00    41   1    0.01
17   0    0.00    42   1    0.01
18   0    0.00    43   3    0.03
19   2    0.02    44   2    0.02
20   3    0.03    45   1    0.01
21   1    0.01    46   1    0.01
22   2    0.02    47   0    0.00
23   0    0.00    48   0    0.00
24   1    0.01    49   0    0.00
25   1    0.01    50   2    0.02

If the subgroup sizes are unequal, then p is estimated as

    p̄ = Σ di / Σ ni

and the (varying) control limits are given by

    p̄ ± 3 √(p̄(1 − p̄)/ni).

Alternatively, an 'average' sample size n̄ = (1/m) Σ ni can be used.
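A minimal sketch of the p-chart limit calculation (only the first ten di values of Table 9 are listed here for brevity; with all 50 subgroups one obtains p̄ = 0.01 and UCL = 0.03985 as above):

```python
# Sketch: p-chart limits from m subgroups of equal size n.
from math import sqrt

d = [0, 0, 2, 0, 1, 0, 2, 2, 1, 1]   # first ten di values from Table 9
n, m = 100, len(d)

pbar = sum(d) / (m * n)
half = 3 * sqrt(pbar * (1 - pbar) / n)
lcl, ucl = max(0.0, pbar - half), pbar + half   # negative LCL is set to zero
print(lcl, pbar, ucl)
```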

[Figure 19: p-chart for resistor defectives, showing the proportion defective
p̂_i for the 50 subgroups with the central line at p̄ = 0.01, LCL = 0 and
UCL = 0.03985.]

8.1.1 Choice of Subgroup Size

Sometimes it may be desirable to have a lower control limit greater than zero
in order to look for samples that contain no defectives, i.e. to detect quality
improvement. The LCL is positive only when p̄ − 3√(p̄(1 − p̄)/n) > 0, i.e. when
n > 9(1 − p̄)/p̄. If p is small, the subgroup size must obviously be very
large; for example, for p = 0.01 the subgroup size must exceed 891 for the LCL
to be greater than zero. Such large subgroup sizes are not practical, and hence
supplementary run tests based on several subgroups are employed to detect
quality improvement.
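The condition n > 9(1 − p)/p is easy to evaluate; a one-function sketch:

```python
# Sketch: smallest subgroup size giving a strictly positive p-chart LCL.
def min_n_positive_lcl(p):
    # LCL > 0  <=>  p > 3*sqrt(p*(1 - p)/n)  <=>  n > 9*(1 - p)/p
    return int(9 * (1 - p) / p) + 1

print(min_n_positive_lcl(0.01))   # 892 (i.e. n must exceed 891)
```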

8.2 np-chart

The np-chart is essentially a p-chart; the only difference is that the observed
number of defectives is plotted directly instead of the observed proportion
defective. If p is the proportion defective, then d, the number of defectives
in a subgroup of size n, follows a binomial distribution with expected value np
and standard deviation √(np(1 − p)). Here p could be a standard value for
Phase II control charting. When no standards are available, one uses the
estimate p̄ and draws the control limits at

    np̄ ± 3 √(np̄(1 − p̄)),

with the central line at np̄. The OC function of the np-chart is similar to
that of the p-chart; on the X-axis one plots d values instead of p values.

8.3 c-chart for Counts

By the term area of opportunity, we mean a unit or a portion of material,
process, product or service in which one or more predefined events can occur.
The term is synonymous with the term unit, and is usually preferred where there
is no natural unit, e.g. a continuous length of cloth.
By defect, we mean the departure of a characteristic from its prescribed
specification level that renders the product or service unfit to meet the
normal usage requirements. By nonconformity, we mean a departure from the
specification requirements in a product which may nevertheless meet the usage
requirements. For example, dirt in a block of cheese is a defect, but being
underweight is just a nonconformity. By c or count, we mean the total number
of predefined events occurring in a given area of opportunity (sample). The
c-chart or count chart is a configuration designed to evaluate the process in
terms of such counts of events (e.g. counts of defects or nonconformities in a
sample). Note that no classification of units as conforming or not occurs
while counting the events; if units are so classified, the relevant chart is
the p-chart.
We assume that the number of nonconformities d follows a Poisson distribution
whose mean and variance are both equal to the parameter c. That is, d follows

    p(d) = e^(−c) c^d / d!,   d = 0, 1, 2, ...   (c > 0).

Since the mean and variance of d are both c, the control limits for the count
d (with a three sigma spread) are given by

    c ± 3 √c,

the central line being at c. If the LCL is less than zero, it is set at zero.
Here c could be a standard value. In its absence, c is estimated by the average
number of nonconformities per sample, say c̄, and the control limits are set at

    c̄ ± 3 √c̄.
Consider Table 10, showing the number of nonconformities observed in 20
subgroups of five cellular phones each. The value of c̄ is 84/20 = 4.2. The
control limits are then found as

    4.2 ± 3 √4.2,

or 0 to 10.4, and the central line is set at 4.2. The total number of defects
found in each subgroup is then plotted on the c-chart shown as Figure 20.
While using a c-chart, a signal for a special cause may require further
analysis using a cause and effect diagram.
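A minimal sketch of the c-chart computation (only a few of the 20 subgroup totals are listed; c̄ is taken as 84/20 from the full data):

```python
# Sketch: c-chart limits for counts of nonconformities per subgroup.
from math import sqrt

totals = [5, 2, 5, 4, 6]                  # first few subgroup defect totals
cbar = 84 / 20                            # 4.2 defects per subgroup
lcl = max(0.0, cbar - 3 * sqrt(cbar))     # negative LCL is set to zero
ucl = cbar + 3 * sqrt(cbar)               # about 10.4
print(lcl, ucl, [c for c in totals if c > ucl])   # no breaches here
```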


Table 10: Number of Nonconformities in various subgroups
(each row gives the counts on the five cellular phones in a subgroup; the
total is the value plotted on the c-chart)

Subgroup   Counts on the five phones   Total
1          3 0 1 1 0                   5
2          1 0 0 1 0                   2
3          0 2 0 1 2                   5
4          2 0 0 0 2                   4
5          0 1 2 2 1                   6
6          0 0 2 1 2                   5
7          0 0 0 0 0                   0
8          1 0 1 0 0                   2
9          1 1 1 2 2                   7
10         0 1 0 1 0                   2
11         3 0 2 0 1                   6
12         1 0 0 0 0                   1
13         1 2 1 2 1                   7
14         0 0 2 1 0                   3
15         1 2 1 1 1                   6
16         0 1 0 0 1                   2
17         1 1 2 0 2                   6
18         2 0 1 2 2                   7
19         0 0 2 0 4                   6
20         1 0 0 1 0                   2

[Figure 20: c-chart for cellphone defects, showing the total number of defects
per subgroup for the 20 subgroups, with the central line at c̄ = 4.2, LCL = 0
and UCL = 10.4.]

8.4 u-chart

The u-chart or count per unit chart is a configuration for evaluating the
process in terms of the average number of predefined events per unit area of
opportunity. The u-chart is convenient for a product composed of units whose
inspection covers more than one characteristic, such as dimensions checked by
gauges, other physical characteristics noted by tests, and visual defects
observed by eye. Under these conditions, independent defects may occur in one
unit of product, and a preferred quality measure is to count all defects
observed and divide by the number of units inspected, giving a value for
defects per unit (rather than a value for the fraction defective). Only
independent defects are to be counted. The u-chart is particularly useful for
products such as textiles, wire, sheet materials, etc, which are continuous
and extensive. Here the opportunity for defects/nonconformities is large even
though the chance of a defect at any one particular spot is small.
The total number of units tested is subdivided into m rational subgroups of
size n each; here n can be fractional. For each subgroup, a value of u, the
defects per unit, is computed. The average number of defects per unit is found
as

    ū = (total number of defects in all samples)/(total number of units in all samples).

Assuming that the number of defects follows the Poisson distribution, the
control limits of the u-chart are given by

    ū ± 3 √(ū/n).

For unequal subgroup sizes, ū is found as

    ū = Σ ni ui / Σ ni

where ni is the ith subgroup size and ui is the number of defects per unit in
the ith subgroup. Here n1, n2, ... need not be whole numbers; e.g. the length
of cloth inspected may be 2.4 m. The control limits are then set at

    ū ± 3 √(ū/ni).

The u-chart for the cellular phone data is given as Figure 21.

[Figure 21: u-chart for the cellphone defects data, showing defects per
cellphone for the 20 subgroups with the central line at ū = 84/100 = 0.84,
LCL = 0 and UCL = ū + 3√(ū/5) ≈ 2.07.]
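A minimal sketch of the u-chart calculation with (possibly fractional and unequal) subgroup sizes; the data values are illustrative:

```python
# Sketch: u-chart with subgroup-specific control limits.
from math import sqrt

d = [5, 2, 5, 4]              # defects found in each subgroup
n = [5.0, 5.0, 4.0, 6.0]      # units inspected (need not be whole numbers)

ubar = sum(d) / sum(n)
for di, ni in zip(d, n):
    lcl = max(0.0, ubar - 3 * sqrt(ubar / ni))
    ucl = ubar + 3 * sqrt(ubar / ni)
    print(di / ni, (lcl, ucl))   # plotted point and its limits
```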

9 Acceptance Sampling

Acceptance sampling is the methodology by which decisions to accept or not
accept (usually a lot or a series of lots) are based on the results of the
inspection of samples. Acceptance sampling is preferred when:


• Testing is destructive.
• The cost and time for 100% inspection are high.
• Less handling of the product is desirable, e.g. when handling can cause
degradation of the product.
• There are limitations on the work force.
• Serious product liability risks exist.
• Inspection is at the pre-shipment or receiving stage.
The disadvantage of acceptance sampling is the risk of accepting bad lots and
rejecting good lots. Acceptance sampling applied to the final product simply
accepts or rejects lots, and hence does not provide any direct form of quality
improvement. Prof. Dodge, the originator of acceptance sampling, therefore
stressed that one cannot inspect quality into a product.
An acceptance sampling plan is a specific plan that clearly states the rules
for sampling and the associated criteria for acceptance or otherwise.
Acceptance sampling plans can be applied not only to the inspection of end
items but also to the inspection of (i) components, (ii) raw materials,
(iii) operations, (iv) materials in process, (v) supplies in storage,
(vi) maintenance operations, (vii) data or records and (viii) administrative
procedures. Acceptance sampling is also commonly employed for safety related
inspection by government departments, particularly when goods are imported.

9.1 Single Sampling Attributes Plan (n, Ac)

The operating procedure of the single sampling attributes plan is as follows:
• From a lot of size N, draw a random sample of size n and observe the number
of nonconforming units (or nonconformities) d.
• If d is less than or equal to the acceptance number Ac (the maximum allowable
number of nonconforming units or nonconformities), accept the lot. If d > Ac,
do not accept the lot.
The symbol Re (= Ac + 1) is used to denote the rejection number.
Non-acceptance does not always imply rejection of the batch: actions such as
salvaging, scrapping, screening or rectifying inspection may follow instead of
total rejection of the batch.
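The decision rule itself is trivially expressed in code; a minimal sketch:

```python
# Sketch: the (n, Ac) single sampling decision rule; Re = Ac + 1.
def single_sampling_decision(d, Ac):
    """d is the number of nonconforming units found in the sample of n."""
    return "accept" if d <= Ac else "do not accept"

print(single_sampling_decision(d=1, Ac=1))   # accept
print(single_sampling_decision(d=2, Ac=1))   # do not accept
```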
The acceptance quality limit (AQL, also called the Acceptable Quality Level in
older literature) is the maximum percentage or proportion of nonconforming
units (or nonconformities) in a lot that can be considered satisfactory for the
purposes of acceptance sampling. When a consumer designates some specific value
of AQL, the supplier or producer is notified that the consumer's acceptance
sampling plan will accept most of the lots submitted, provided the process
average of these lots is not greater than the designated AQL. It must be
understood that the specification of an AQL is only for sampling purposes, and
is not a licence to knowingly supply nonconforming items.
Suppose that an error-free 100% inspection is done to observe the true fraction
nonconforming p of each lot. Then all lots with p ≤ AQL would be accepted and
all lots with p > AQL would not be accepted. This ideal situation is shown
graphically in Figure 22 for an AQL of 0.1%.
Due to sampling, one faces the risk of not accepting lots of AQL quality as
well as the risk of accepting lots of poorer than AQL quality. One is therefore
interested in knowing how an acceptance sampling plan will accept or not accept
lots over various lot qualities. A curve showing the probability of acceptance
over various lot or process qualities is called the operating characteristic
(OC) curve and is discussed in the next section.

9.2 Operating Characteristic (OC) Curve

The OC curve reveals the performance of a sampling inspection plan in
discriminating between good and bad lots. There are two types of OC curves:
Type A (for isolated or unique lots): a curve showing the probability of
accepting a lot as a function of the lot quality.
Type B (for a continuous stream of lots): a curve showing the probability of
accepting a lot as a function of the process average. That is, the Type B OC
curve gives the proportion of lots accepted as a function of the true process
fraction nonconforming p.

[Figure 22: Ideal OC curve for an AQL of 0.1%: the probability of acceptance
is 1 for fraction nonconforming p ≤ 0.001 and 0 for p > 0.001.]

A typical OC curve is shown in Figure 23.

9.2.1 OC Function of a Single Sampling Plan

The OC function of the single sampling attributes plan, giving the probability
of acceptance for a given lot or process quality p, is

    Pa = Pa(p) = Pr(d ≤ Ac | n, Ac, p).

For Type A situations, the hypergeometric distribution is exact for the case
of nonconforming units. Hence

    Pa(p) = Pr(d ≤ Ac | N, n, Ac, p)
          = Σ_{d=0}^{Ac} [C(D, d) C(N − D, n − d)] / C(N, n)

where N is the lot size, D is the number of defectives in the lot, and hence
the lot fraction nonconforming is p = D/N.

[Figure 23: A typical OC curve: the probability of acceptance falls smoothly
from 1 towards 0 as the fraction nonconforming p increases from 0 to 0.05.]

For Type B situations, the binomial model is exact for the case of fraction
nonconforming units, and the OC function is given by

    Pa(p) = Pr(d ≤ Ac | n, Ac, p) = Σ_{d=0}^{Ac} C(n, d) p^d (1 − p)^(n−d).

This OC function applies to a continuous stream of lots, and can also be used
as an approximation in Type A situations when N is large compared to n
(n/N < 0.10) and p is small.
For the case of nonconformities per unit, the Poisson model is exact for both
Type A and Type B situations. The OC function in this case is

    Pa(p) = Σ_{d=0}^{Ac} e^(−np) (np)^d / d!.

The Poisson OC function is also used as an approximation to the binomial when
n is large and p is small, such that np < 5. In general, the probability of
acceptance will be underestimated at good quality levels and overestimated at
poor quality levels if one approximates the hypergeometric OC function by a
binomial or Poisson OC function; the same is true when the binomial OC function
is approximated by the Poisson OC function. This is shown graphically in
Figure 24, assuming N = 200, n = 20 and Ac = 1.
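A minimal sketch, using only the Python standard library, that evaluates the three OC functions for the parameters of Figure 24 (N = 200, n = 20, Ac = 1):

```python
# Sketch: binomial, Poisson and hypergeometric OC functions.
from math import comb, exp, factorial

def oc_binomial(p, n=20, Ac=1):
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(Ac + 1))

def oc_poisson(p, n=20, Ac=1):
    return sum(exp(-n * p) * (n * p)**d / factorial(d) for d in range(Ac + 1))

def oc_hypergeometric(p, N=200, n=20, Ac=1):
    D = round(N * p)   # defectives in the lot (math.comb returns 0 for
                       # impossible terms, so no special cases are needed)
    return sum(comb(D, d) * comb(N - D, n - d) for d in range(Ac + 1)) / comb(N, n)

for p in (0.01, 0.05, 0.10):
    print(p, oc_hypergeometric(p), oc_binomial(p), oc_poisson(p))
```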

[Figure 24: Comparison of OC curves: the hypergeometric, binomial and Poisson
OC curves for N = 200, n = 20, Ac = 1, plotted over fraction nonconforming p
from 0 to 0.35.]

9.2.2 Effect of n and Ac on the OC Curve

When n is held constant and Ac is increased, Pa(p) increases; when Ac is held
constant and n is increased, Pa(p) decreases (see Figures 25 and 26). These
two properties are useful for designing a sampling plan with the desired
discrimination.

Suppose that a company manufacturing cheese continuously supplies its
production to two supermarkets. Supermarket A requires cheese in lots of 5000
units and supermarket B prefers lots of size 10000. Assume that the producer
uses two sampling plans, each with sample size equal to the square root of the
lot size, and with the acceptance number fixed at one for both plans. The
company's true fraction of nonconforming cheeses produced is 1%, which is
considered the acceptable quality level by the consuming supermarkets.

[Figure 25: Effect of the acceptance number on the OC curve: curves for
n = 50, Ac = 1 and n = 50, Ac = 2.]

[Figure 26: Effect of the sample size on the OC curve: curves for
n = 50, Ac = 1 and n = 100, Ac = 1.]

The effectiveness of the two sampling plans (plan for supermarket A: n = 71,
Ac = 1; plan for supermarket B: n = 100, Ac = 1) is revealed by their
respective OC curves, shown in Figure 27.

[Figure 27: Comparison of OC curves for a given AQL: the OC curves of the
plans n = 71, Ac = 1 and n = 100, Ac = 1, with the AQL of 1% marked.]

The proportions of lots of AQL quality accepted by the two plans are:
• Pa(AQL) = 84% for the n = 71, Ac = 1 plan;
• Pa(AQL) = 74% for the n = 100, Ac = 1 plan.
It is evident that the plan used for supermarket B is tighter than the plan
used for supermarket A: for a fixed acceptance number, an increase in sample
size means a tightening of inspection. It is always desirable that the
probability of acceptance at the AQL be high, say 95%. Neither plan has a high
Pa at the AQL; the plan n = 71, Ac = 1 is preferable to the plan n = 100,
Ac = 1 since its Pa(AQL) = 84% is closer to 95%. The manufacturer is regularly
supplying cheese to both supermarkets. Under the Type B situation of a series
of lots being submitted, the lots are themselves viewed as random samples from
the process producing cheese; one therefore need not sample in relation to the
lot size. If it is desired to encourage large lot sizes, then the acceptance
number should be adjusted so that the Pa at the AQL is higher for large lot
sizes.
Arguments in favour of the n = 100, Ac = 1 plan can also be given. The
consuming supermarkets must be protected against bad quality lots. For example,
lots having 5% nonconforming cheeses may be required to be rejected with a
large probability to protect the consumers' interests. The plan n = 100,
Ac = 1 is tighter and has a smaller probability of acceptance at the rejectable
quality level, namely 5% nonconforming (see Figure 28).

[Figure 28: Comparison of OC curves for a given LQL: the OC curves of the
plans n = 71, Ac = 1 and n = 100, Ac = 1, with the LQL of 5% marked.]

Thus it is worthwhile to prescribe an additional index for consumer protection,
since the AQL does not completely describe the protection given to the
consumer. For consumer protection against bad quality lots, the Limiting
Quality Level (LQL), or simply the limiting quality (LQ), is defined as the
percentage or proportion of nonconforming units in a lot for which the consumer
wishes the probability of acceptance to be restricted to a specified low value.
The producer's risk (α) is the probability of not accepting a lot of AQL
quality, and the consumer's risk (β) is the probability of accepting a lot of
LQL quality. Generally the parameters of the (single) sampling plan are
determined for given quality levels and risks, namely AQL, LQL, α and β.
Figure 29 shows the quality indices AQL and LQL and the associated risks
α (= 5%) and β (= 10%) on a typical OC curve. In practice, the sample size
and the acceptance number are chosen to meet the given producer's and
consumer's risk points (i.e. AQL, LQL, α and β).
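A brute-force search sketch under the binomial model (the search bounds are arbitrary assumptions here; published tables and nomographs are normally used in practice):

```python
# Sketch: smallest (n, Ac) single sampling plan meeting two risk points,
# i.e. Pa(AQL) >= 1 - alpha and Pa(LQL) <= beta.
from math import comb

def pa(p, n, Ac):
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(Ac + 1))

def design_plan(aql, lql, alpha=0.05, beta=0.10, n_max=500, ac_max=20):
    for n in range(1, n_max + 1):
        for Ac in range(min(n, ac_max + 1)):
            if pa(aql, n, Ac) >= 1 - alpha and pa(lql, n, Ac) <= beta:
                return n, Ac
    return None   # no plan within the search bounds

print(design_plan(aql=0.01, lql=0.05))
```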
[Figure 29: OC curve showing AQL, LQL, α and β: the curve passes through the
points (AQL, 0.95) and (LQL, 0.10), with α = 0.05 and β = 0.1 marked.]

9.2.3 Average Sample Number (ASN)

The average sample number (ASN) is the average number of sample units per lot
needed for deciding acceptance or non-acceptance. For a single sampling plan,
one takes only a single sample of size n, and hence the ASN is simply the
sample size n. However, the sampling inspection can be curtailed. By curtailed
inspection we mean stopping the sampling inspection when a decision is certain.
Inspection can be curtailed when the rejection number is reached, since
rejection is then certain and no further inspection is necessary to reach that
decision; such curtailment of inspection for rejecting a lot is known as
semi-curtailed inspection. If inspection is curtailed as soon as either
acceptance or rejection is evident, it is known as fully curtailed inspection.
For example, consider the single sampling plan with n = 50 and Ac = 1, with
the sample drawn randomly and the units tested one by one. One can curtail
inspection, rejecting the lot, as early as the second unit if the first two
units are both nonconforming. Similarly, if the first 49 units are all found
to be conforming, then the lot can be accepted without testing the last unit.
Generally it is undesirable to curtail inspection in a single sampling plan;
the whole sample is usually inspected in order to maintain an unbiased record
of the quality history.

9.3 Double Sampling Plans

Single sampling plans are simple to use, but the producer is often at a
'psychological' disadvantage when a single sampling plan is applied, since no
second chance is given to lots that are not accepted. In such situations,
taking a second sample is preferable.
The operating procedure of the double sampling plan is given in the following
steps (a sketch of the resulting acceptance probability follows the flow
diagram):
1. Draw a first random sample of size n1 and observe the number of
nonconforming units (nonconformities) d1.
2. If d1 ≤ Ac1, the first stage acceptance number, accept the lot. If
d1 ≥ Re1, the first stage rejection number, reject the lot. If
Ac1 < d1 < Re1, go to Step 3.
3. Take a second random sample of size n2 and observe the number of
nonconforming units (nonconformities) d2. Cumulate d1 and d2, and let
D = d1 + d2. If D ≤ Ac2, the second stage acceptance number, accept the lot.
If D ≥ Re2 (= Ac2 + 1), reject the lot.
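The probability of acceptance of a double sampling plan combines acceptance on the first sample with acceptance on the combined samples; here is a minimal sketch under the binomial model (the plan parameters in the example are illustrative):

```python
# Sketch: OC function of a double sampling plan under the binomial model.
from math import comb

def b(d, n, p):
    return comb(n, d) * p**d * (1 - p)**(n - d)

def pa_double(p, n1, n2, Ac1, Re1, Ac2):
    pa = sum(b(d1, n1, p) for d1 in range(Ac1 + 1))       # accept on sample 1
    for d1 in range(Ac1 + 1, Re1):                        # undecided: sample 2
        pa += b(d1, n1, p) * sum(b(d2, n2, p) for d2 in range(Ac2 - d1 + 1))
    return pa

print(pa_double(0.02, n1=50, n2=50, Ac1=1, Re1=4, Ac2=4))
```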
The operating flow diagram for the double sampling plan is given as Figure 30.

[Figure 30: Operation of the double sampling plan: draw n1 and observe d1;
ACCEPT if d1 ≤ Ac1, REJECT if d1 ≥ Re1; if Ac1 < d1 < Re1, draw n2, observe d2
and let D = d1 + d2; then ACCEPT if D ≤ Ac2 and REJECT if D ≥ Re2.]
The double sampling plan described above can be compactly represented as shown
in Table 11. The double sampling plan has five parameters: n1, n2, Ac1, Re1
and Ac2 (= Re2 − 1). The major advantage of the double sampling plan is that
it can be designed to have a smaller ASN than an 'equivalent' single sampling
plan having approximately the same OC curve.

Table 11: Double Sampling Plan

Stage   Sample Size   Acceptance Number   Rejection Number
1       n1            Ac1                 Re1
2       n2            Ac2                 Re2

The double sampling plan is relatively harder to administer than the single
sampling plan, and if its parameters are not properly fixed it may even be
inefficient compared with a single sampling plan.

9.4 Multiple Sampling Plan

The multiple sampling plan is a natural extension of the double sampling plan.
The number of stages in a multiple sampling plan is usually fixed at 7. A
multiple sampling plan will require a smaller sample size than a double
sampling plan, but is more complex to implement. A multiple sampling plan
having m stages can be compactly represented as in Table 12.
Table 12: Multiple Sampling Plan

Stage   Sample Size   Acceptance Number   Rejection Number
1       n1            Ac1                 Re1
2       n2            Ac2                 Re2
3       n3            Ac3                 Re3
...     ...           ...                 ...
m       nm            Acm                 Rem (= Acm + 1)

Let Di = Σ_{j=1}^{i} dj, i = 1, 2, ..., m. The lot is accepted if Di ≤ Aci and
rejected if Di ≥ Rei, i = 1, 2, ..., m. Further sampling of n_{i+1} units is
carried out if Aci < Di < Rei. It is usual in multiple sampling not to allow
acceptance in the initial stages whenever small sample sizes are employed. For
administrative reasons, it is also usual to fix n1 = n2 = ... = nm and m = 7.
The generalisation of the multiple sampling plan is the sequential sampling
plan, in which items are tested one by one. The sequential plan is harder to
use in practice due to administrative difficulty, and is preferable only when
testing is costly or destructive.

9.5 International Standards for Sampling Inspection

If the quality history is good, as evidenced by past lot acceptances, it is
desirable to reduce the amount of sampling. If the quality history is bad
(more lot rejections), then it is essential to tighten the inspection, either
by increasing the sample size or by reducing the acceptance numbers. In such
situations the use of more than one sampling plan is desirable, with associated
rules for switching between the plans. Most acceptance sampling standards
formulated by the International Organization for Standardization (ISO)
incorporate such switching rules and provide comprehensive sampling schemes.
A sampling scheme is a specific set of procedures, usually consisting of two
or three individual acceptance sampling plans, in which lot sizes, sample
sizes, acceptance criteria, the amount of 100% inspection, etc are related.
There are several ISO Standards for sampling inspection. These Standards also
provide the OC and ASN properties of the tabulated sampling schemes. They also
cover the variables method of sampling, where a measurement is made on a
continuous scale on each unit of inspection, as against the attribute method.
The following are the popular sampling standards used in industry.
1. ISO 2859-0, Sampling procedures for inspection by attributes – Part 0:
Introduction to the ISO 2859 attribute sampling system
2. ISO 2859-1, Sampling procedures for inspection by attributes – Part 1:
Sampling schemes indexed by acceptance quality limit (AQL) for lot-by-lot
inspection
3. ISO 2859-2, Sampling procedures for inspection by attributes – Part 2:
Sampling plans indexed by limiting quality (LQ) for isolated lot inspection
4. ISO 2859-3, Sampling procedures for inspection by attributes – Part 3:
Skip-lot sampling procedures
5. ISO 2859-4, Sampling procedures for inspection by attributes – Part 4:
Procedures for assessment of declared quality levels
6. ISO 3951-1, Sampling procedures for inspection by variables – Part 1:
Specification for single sampling plans indexed by acceptance quality limit
(AQL) for lot-by-lot inspection for a single quality characteristic and a
single AQL

9.6 Summary

Quality is a latent variable and need not always imply excellence. The
operational way to assess quality is how far the needs of customers, and of
society in general, are met in a cost effective manner.
Statistical methods are important for understanding the quality of products
(or services), and for its measurement and improvement. Quality is inversely
proportional to variability, which can be expressed only in statistical terms;
so for improving quality, we must reduce the variability involved.

Statistical thinking is important for understanding a process and for isolating
the key variables causing variation. Experimental designs play a key role in
achieving the optimum settings of the controllable variables in a production
process. New nuisance variables and unusual special cause conditions may arise
during production, affecting quality. Hence control charts are employed to
ensure that a state of statistical control exists during production, in order
to hold the gains.
Variables charts consider important quality characteristics measurable on a
continuous scale. The X̄ chart is used to monitor the process mean or level;
S or R charts accompany the X̄ chart to monitor increases or decreases in the
common cause variability. Attribute control charts such as the p-chart are
employed to monitor the level of nonconformance across several characteristics,
quality attributes and other specification requirements. The sensitivity of
3-sigma limits is improved by supplementary run tests.
Acceptance sampling provides quality assurance. Sampling plans such as single
sampling plans are employed not only in the disposition of the final product
but also for procurement quality assurance.
Quality must be built into products and services because it provides a
competitive edge and higher profitability. Statistics plays a useful role,
along with engineering, management, psychology and other disciplines, in
achieving quality.

9.7 References

ASQ Statistics Division (2004). Glossary and Tables for Statistical Quality
Control. ASQ Quality Press, Milwaukee, Wisconsin, USA.
Montgomery, D. C. (1996). Introduction to Statistical Quality Control, Third
Edition. John Wiley & Sons, New York, NY.
Shewhart, W. A. (1931). Statistical Method from an Engineering Viewpoint.
Journal of the American Statistical Association, 26, pp. 262-269.
Sohn, H. S. and Park, T. W. (2006). Process Optimization for the Improvement
of Brake Noise: A Case Study. Quality Engineering, 18, pp. 131-143.


Exercises
11.1 How would you provide numerical measures of the quality of the service
provided by the following?
a. Postal Mail
b. A university canteen
11.2 A city council used to distribute paper garbage bags annually to its
ratepayers. It decided to replace the paper bags with plastic ones. The plastic
bags are cheaper and thinner than the (heavier) paper bags, and experimental
studies showed that both types of bags degrade in approximately the same time.
Samples of bags submitted by a few plastic bag manufacturers were inspected to
short-list a supplier. An order was then placed with a supplier to manufacture
the plastic bags in packets of 52. The manufacturer supplied the plastic bag
packets in large batches over time, and these were distributed to the
ratepayers (without any batch by batch inspection).
A number of ratepayers complained about the quality of the plastic garbage
bags supplied to them. The main complaints were (i) the plastic bags were not
strong enough to hold the usual amount and type of waste and (ii) some packets
contained fewer than 52 bags and hence were insufficient for a year. It was
found that the use of excessive recycled plastic during some production periods
caused the strength problems (i.e. splitting etc). It was claimed that the
under-count of bags was a matter of chance and not deliberate.
a. Describe the meaning of the term quality for the plastic bags. What
difficulties are involved in comparing the plastic bags with the paper ones?
Explain your answer considering the definition of quality as 'the totality of
features and characteristics of a product or service that bear on its ability
to satisfy given needs'.
b. What quality measures can be used for the plastic bag quality?
c. Why is Taguchi's philosophy of 'deviation from the target is a loss to
society' more appropriate in the context of garbage bag quality?
11.3 In finance, the efficiency of a stock market is assessed on the basis of
whether or not the daily returns are randomly distributed. Assume that the
normal distribution models the return variation due to common causes. Use the
NASDAQ daily index and show graphically how dominant the special and common
causes are.
11.4 A company is offering financial incentives to sales personnel based on
their share of weekly sales. Does this strategy recognise the existence of
common and special causes of variation in sales? Why might this strategy
affect staff morale? Discuss.

11.5 Apply the PDSA approach to a common activity such as "Keeping in Touch
with Relatives and Friends". Write down all the steps involved and discuss any
improvements made. What issues were faced, and which remained unresolved
(if any)?
11.6 A textile mill collected data on the quality of yarn at different time points
and observed the number of defects due to various causes. The data are
shown in Table 13. Draw a Pareto Chart and offer your comments.
Table 13: Yarn Defect Data

Subgroup  Number of    Count    Low   Thick    Thin     Others
          Leas tested  Not Met  CSP   Places   Places
1         100          2        1     0        2        0
2         100          3        1     1        1        1
3         100          4        1     2        0        1
4         100          3        3     4        2        4
5         100          2        1     0        0        0
6         100          1        1     0        0        2
7         100          8        2     1        1        1
8         100          5        1     1        0        0
9         100          2        0     0        0        0
10        100          1        0     0        1        1

11.7 Identify the type of graphical quality tool displayed in Figure 31 and
state its uses.

[Figure 31: A QC graph: a form headed 'XYZ Company, PQRS.' with fields for
Sampler, Date, No. of Gears Inspected and No. of Burrs, and tally marks
(e.g. xxx, x, xxxxx) recorded against items.]

11.8 Table 14 gives data on the number and causes of rejection of metal
castings observed in a foundry.
a. Design a check sheet which would have enabled the collection of these data.
b. Prepare a Pareto chart for the causes of poor metal castings and offer
your recommendations.

Table 14: Castings defect data

Day   No. of metal  Sand   Misrun   Shift   Drop   Core    Broken   Others
      castings                                     break
1     20            2      1        0       2      0       0        1
2     20            3      1        1       1      1       0        0
3     20            4      1        2       0      1       1        1
4     20            3      3        4       2      4       0        0
5     20            2      1        0       0      0       0        0
6     20            1      1        0       0      2       0        1
7     20            8      2        1       1      1       0        0
8     20            5      1        1       0      0       0        0
9     20            2      0        0       0      0       1        0
10    20            1      0        0       1      1       1        2

11.9 Table 15 gives data (25 samples of size five each, taken at equal time
intervals) on the inside diameter of piston rings for an automotive engine
produced by a forging process (data from Montgomery, D. C., Introduction to
Statistical Quality Control, John Wiley & Sons, Second Edition). This data set
is also used in a subsequent exercise. Perform EDA of the retrospective data
using the following tools, and discuss whether or not your EDA discovered
anything alarming enough to call for an engineering investigation of special
causes.
a. histogram
b. run chart
11.10 A large distributing company procures eggs and stores them under
thermo-stabilised conditions. For the packaging process of stored eggs, the
following quality characteristics were employed:
– WEIGHT (specifications: 65 ± 5 g)
– HAUGH units, an index of the interior quality of eggs (specification:
≥ 81 units)
– APPEARANCE (visual test: Pass or Fail)
The retrospective data collected on the above variables are given in Table 16.
Table 15: Piston Ring Diameter Data

Sample  Obs 1    Obs 2    Obs 3    Obs 4    Obs 5
1       74.030   74.002   74.019   73.992   74.008
2       73.995   73.992   74.001   74.011   74.004
3       73.988   74.024   74.021   74.005   74.002
4       74.002   73.996   73.993   74.015   74.009
5       73.992   74.007   74.015   73.989   74.014
6       74.009   73.994   73.997   73.985   73.993
7       73.995   74.006   73.994   74.000   74.005
8       73.985   74.003   73.993   74.015   73.988
9       74.008   73.995   74.009   74.005   74.004
10      73.998   74.000   73.990   74.007   73.995
11      73.994   73.998   73.994   73.995   73.990
12      74.004   74.000   74.007   74.000   73.996
13      73.983   74.002   73.998   73.997   74.012
14      74.006   73.967   73.994   74.000   73.984
15      74.012   74.014   73.998   73.999   74.007
16      74.000   73.984   74.005   73.998   73.996
17      73.994   74.012   73.986   74.005   74.007
18      74.006   74.010   74.018   74.003   74.000
19      73.984   74.002   74.003   74.005   73.997
20      74.000   74.010   74.013   74.020   74.003
21      73.988   74.001   74.009   74.005   73.996
22      74.004   73.999   73.990   74.006   74.009
23      74.010   73.989   73.990   74.009   74.014
24      74.015   74.008   73.993   74.000   74.010
25      73.982   73.984   73.995   74.017   74.013

Table 16: Egg Quality Data
Subgroup   Egg weight   Haugh Unit   Appearance
   1         66.06        83.16        Pass
   1         65.92        82.94        Pass
   1         63.13        87.19        Pass
   1         64.75        87.39        Pass
   2         64.39        86.07        Pass
   2         64.91        87.20        Pass
   2         66.29        85.86        Pass
   2         65.25        84.59        Pass
   3         65.60        87.05        Pass
   3         63.50        88.03        Pass
   3         65.67        82.99        Pass
   3         66.58        82.93        Pass
   4         65.66        86.12        Pass
   4         64.41        87.47        Pass
   4         65.42        86.37        Pass
   4         64.20        86.87        Pass
   5         63.62        83.15        Fail
   5         65.62        84.94        Pass
   5         64.53        87.71        Pass
   5         64.99        84.51        Pass
   6         65.11        85.13        Pass
   6         63.94        85.68        Pass
   6         65.28        85.46        Pass
   6         65.07        85.16        Pass
   7         64.91        83.98        Pass
   7         65.74        83.73        Pass
   7         67.11        82.87        Pass
   7         64.40        83.83        Pass
   8         65.50        86.40        Pass
   8         65.61        84.63        Pass
   8         64.09        84.66        Pass
   8         63.96        83.56        Pass
   9         63.44        81.86        Pass
   9         64.45        85.17        Pass
   9         63.64        88.02        Pass
   9         63.30        85.31        Pass
  10         67.17        83.67        Pass
  10         64.89        83.02        Pass
  10         63.81        84.63        Fail
  10         65.30        88.00        Pass

a. Draw subgroup-wise boxplots and discuss the variability in the Egg weight
and Haugh Unit measurements considering the specifications.
b. Prepare a scatter plot of Egg weight vs. Haugh Unit and discuss
whether these two quality characteristics can be controlled independently.
c. What proportion of eggs passed all the three specifications?
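As a hint, a minimal Python sketch for parts a to c follows, assuming the
Table 16 data have been entered into a CSV file (eggs.csv is a placeholder)
with columns Subgroup, Weight, Haugh and Appearance:

import pandas as pd
import matplotlib.pyplot as plt

eggs = pd.read_csv("eggs.csv")           # placeholder file

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(13, 4))
eggs.boxplot(column="Weight", by="Subgroup", ax=ax1)
ax1.axhline(60, color="red")             # lower spec, 65 - 5 g
ax1.axhline(70, color="red")             # upper spec, 65 + 5 g
eggs.boxplot(column="Haugh", by="Subgroup", ax=ax2)
ax2.axhline(81, color="red")             # one-sided spec, at least 81 units
ax3.scatter(eggs["Weight"], eggs["Haugh"])   # any association?
ax3.set(xlabel="Weight (g)", ylabel="Haugh unit")

# Proportion of eggs meeting all three specifications.
ok = (eggs["Weight"].between(60, 70) & (eggs["Haugh"] >= 81)
      & (eggs["Appearance"] == "Pass"))
print("Proportion passing all specs:", ok.mean())
plt.show()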
11.11 If a process was known to be normally distributed with mean zero and
standard deviation one, what values of Cp and Cpk would the process
generate if the USL was 3, the LSL was -3, and the target was zero?
Generate a column of 40 random observations from a normal distribution
with zero mean and unit standard deviation. Does the result agree with
the theory? Explain why or why not.
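As a hint, with USL = 3, LSL = -3, mean 0 and sigma 1, theory gives
Cp = (USL - LSL)/(6 sigma) = 1 and Cpk = min(USL - mean, mean - LSL)/(3 sigma) = 1.
A minimal simulation sketch in Python:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 40)                 # 40 observations from N(0, 1)
usl, lsl = 3.0, -3.0
s, xbar = x.std(ddof=1), x.mean()

cp = (usl - lsl) / (6 * s)
cpk = min(usl - xbar, xbar - lsl) / (3 * s)
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")  # close to, but not exactly, 1

Because the estimates are based on only 40 observations, repeated runs will
scatter around 1; that sampling variability is the expected source of any
disagreement with the theory.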
11.12 Obtain the relevant process capability measures for Egg weight and Haugh
Unit quality characteristics (Table 16 data).
11.13 An automobile manufacturer is interested in controlling the journal diameter of a rear wheel axle to dimensions 49.995 mm to 50.005 mm. The data
given in Table 17 were collected from the automatic grinding machine
used to manufacture the wheel axle. A machine adjustment was required
following the 14th subgroup.
Table 17: Journal diameter data
Subgroup    Time     Obs 1    Obs 2    Obs 3    Obs 4
   1       9AM      50.006   49.995   50.001   49.999
   2       9:30AM   50.007   49.999   50.000   50.000
   3      10AM      49.999   50.006   50.001   49.997
   4      10:30AM   49.995   50.000   49.994   49.998
   5      11AM      49.996   49.994   50.004   50.000
   6      11:30AM   49.996   49.999   49.999   50.002
   7      12Noon    50.003   50.002   49.999   50.004
   8      12:30PM   50.000   50.001   50.004   49.998
   9       1PM      50.003   49.999   49.996   49.995
  10       1:30PM   50.003   50.000   49.999   50.001
  11       2PM      50.000   49.999   50.002   50.004
  12       2:30PM   50.002   50.004   50.001   49.997
  13       3PM      49.997   49.997   49.999   49.999
  14       3:30PM   49.990   49.997   49.994   49.994
  15       4PM      50.001   49.995   49.995   49.995
  16       4:30PM   50.000   49.999   49.995   49.999
  17       5PM      49.998   50.003   49.999   49.995
  18       5:30PM   49.994   49.997   49.998   49.998

a. Obtain the subgroup means, ranges and standard deviations.
b. Obtain the estimate of the common cause sigma using the subgroup
ranges and standard deviations (i.e., $\bar{R}/d_2$ and $\bar{S}/c_4$ respectively).
c. Obtain the $\bar{X}$ chart control limits using the $\bar{R}/d_2$ estimate.
d. Obtain the $\bar{X}$ chart control limits using the $\bar{S}/c_4$ estimate.
e. Obtain the R chart control limits.
f. Obtain the S chart control limits.
g. Draw the $\bar{X}$ and R charts and interpret the signals, if any.
h. Draw the $\bar{X}$ and S charts and interpret the signals, if any.
i. Discuss whether or not the machine adjustment made at about 3:30PM
was indeed in order.
j. Why can the above Phase I control charts not be used for future
monitoring?
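As a hint, a minimal Python sketch of the Phase I computations follows,
assuming the Table 17 measurements are stored as an 18×4 array (journal.txt
is a placeholder file name); the standard control chart constants for
subgroups of size 4 are d2 = 2.059, A2 = 0.729, D3 = 0 and D4 = 2.282:

import numpy as np

x = np.loadtxt("journal.txt")            # placeholder file, shape (18, 4)
xbar = x.mean(axis=1)                    # subgroup means
r = x.max(axis=1) - x.min(axis=1)        # subgroup ranges

xbarbar, rbar = xbar.mean(), r.mean()
d2, A2, D3, D4 = 2.059, 0.729, 0.0, 2.282

sigma_hat = rbar / d2                    # common cause sigma estimate
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

print(f"sigma-hat = {sigma_hat:.5f}")
print(f"X-bar chart: LCL = {lcl_x:.4f}, CL = {xbarbar:.4f}, UCL = {ucl_x:.4f}")
print(f"R chart:     LCL = {lcl_r:.4f}, CL = {rbar:.4f}, UCL = {ucl_r:.4f}")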
11.14 Consider the piston ring diameter data (Table 15). Treat the first 20 subgroups for Phase I analysis and the last 5 subgroups for Phase II analysis.
a. Obtain the subgroup means, ranges and standard deviations.
b. Obtain the estimate of the common cause sigma using the subgroup
ranges and standard deviations (i.e., $\bar{R}/d_2$ and $\bar{S}/c_4$ respectively).
c. Obtain the $\bar{X}$ chart control limits using the $\bar{R}/d_2$ estimate.
d. Obtain the $\bar{X}$ chart control limits using the $\bar{S}/c_4$ estimate.
e. Obtain the R chart control limits.
f. Obtain the S chart control limits.
g. Draw the $\bar{X}$ and R charts and interpret the signals, if any.
h. Draw the $\bar{X}$ and S charts and interpret the signals, if any.
i. Plot the last 5 subgroup data on the $\bar{X}$ and S charts derived from
the Phase I analysis. Interpret the charts.
11.15 A manufacturer of electronic components checks the resistivity of 100
resistors drawn randomly from each production batch. Table 18 shows
the number of faulty resistors discovered for 140 batches (read through
columns).
Table 18: Faulty Resistor Data (read down the columns)

2   5   2   1   5   4   3
5   2   3   2   3   4   1
4   3   2   2   1   2   2
3   4   3   2   5   5   0
1   4   6   8   2   2   3
3   4   2   2   2   1   1
2   8   0   3   4   5   4
5   3   6   1   2   1   1
8   2   3   4   5   4   3
2   3   1   3   2   5   4
2   8   2   2   4   3   6
3   2   2   3   5   3   1
3   2   4   1   1   5   3
3   3   5   6   4   3   2
3   5   0   4   3   4   5
6   4   1   2   3   3   1
2   4   2   3   3   4   2
0   4   1   2   7   1   1
4   2   4   6   4   3   3
1   4   5   1   3   5   0

a. Plot a p-chart for these data.
b. Interpret the p-chart for the presence of any special causes.
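As a hint, a minimal p-chart sketch in Python follows, assuming the 140 counts
of Table 18 have been entered in batch order into a text file (resistors.txt
is a placeholder):

import numpy as np
import matplotlib.pyplot as plt

faulty = np.loadtxt("resistors.txt").ravel()   # placeholder file, 140 counts
n = 100                                        # resistors checked per batch
p = faulty / n
pbar = p.mean()
ucl = pbar + 3 * np.sqrt(pbar * (1 - pbar) / n)
lcl = max(0.0, pbar - 3 * np.sqrt(pbar * (1 - pbar) / n))

plt.plot(p, marker="o")
for y in (lcl, pbar, ucl):
    plt.axhline(y, color="red")
plt.title("p-chart of faulty resistors")
plt.xlabel("Batch")
plt.ylabel("Fraction faulty")
plt.show()                     # points beyond the limits flag special causes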
11.16 The historical data collected by a foundry engineer on the number of
defective castings in 80 castings sampled randomly from a day’s production
for a period of 100 days are given in Table 19 (read through columns).
Table 19: Castings Defect data
1   2   3   0   2
1   0   0   2   2
0   3   1   0   0
2   3   1   4   1
0   0   0   0   2
2   1   2   2   0
2   0   3   2   1
3   1   1   1   1
1   0   0   1   1
1   2   0   0   0
0   1   0   1   1
2   1   1   1   1
2   1   0   0   1
0   0   1   3   0
1   1   3   2   2
0   1   1   1   2
3   1   0   3   2
0   4   2   1   0
0   1   3   0   0
0   1   1   2   2

a. Consider the data for the first 80 days and establish a suitable control
procedure for the Phase I analysis.
b. Apply the standard to the last 20 subgroups and interpret the results.
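As a hint, an np-chart is one suitable choice here; a minimal Python sketch
follows, assuming the 100 daily counts of Table 19 sit in day order in a text
file (castings.txt is a placeholder). The Phase I limits are computed from
the first 80 days and then applied to all 100:

import numpy as np
import matplotlib.pyplot as plt

defects = np.loadtxt("castings.txt").ravel()   # placeholder file, 100 counts
n = 80                                         # castings sampled per day
phase1 = defects[:80]
pbar = phase1.mean() / n                       # Phase I estimate of p
center = n * pbar
ucl = center + 3 * np.sqrt(n * pbar * (1 - pbar))
lcl = max(0.0, center - 3 * np.sqrt(n * pbar * (1 - pbar)))

plt.plot(defects, marker="o")                  # all 100 days, Phase I limits
for y in (lcl, center, ucl):
    plt.axhline(y, color="red")
plt.axvline(79.5, linestyle="--")              # Phase I / Phase II boundary
plt.xlabel("Day")
plt.ylabel("Number defective")
plt.show()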
11.17 Table 20 gives the nonconformities (d) observed in the daily inspection of
a certain number of disk-drive assemblies (n). Does the process appear to
be in control?
Table 20: Disk-drive Assembly data
Day    n    d
 1    17   13
 2    19   25
 3    17    0
 4    16    7
 5    18   14
 6    19   18
 7    17   10
 8    19   21
 9    18   16
10    16    3
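As a hint, because the counts in Table 20 are nonconformities (they can exceed
the sample size, as on Day 2), a u-chart with limits that vary with each day's
n is one reasonable choice. A minimal Python sketch follows, with the n and d
values copied from the table:

import numpy as np
import matplotlib.pyplot as plt

n = np.array([17, 19, 17, 16, 18, 19, 17, 19, 18, 16])
d = np.array([13, 25, 0, 7, 14, 18, 10, 21, 16, 3])

u = d / n
ubar = d.sum() / n.sum()                   # overall nonconformities per unit
ucl = ubar + 3 * np.sqrt(ubar / n)         # limits vary with each day's n
lcl = np.clip(ubar - 3 * np.sqrt(ubar / n), 0, None)

days = np.arange(1, 11)
plt.plot(days, u, marker="o")
plt.step(days, ucl, where="mid", color="red")
plt.step(days, lcl, where="mid", color="red")
plt.axhline(ubar, color="green")
plt.title("u-chart of disk-drive assembly nonconformities")
plt.xlabel("Day")
plt.ylabel("Nonconformities per unit")
plt.show()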

11.18 Table 21 provides the data on the number of joints welded (n) and the
number of nonconforming joints (d).
a. Set up an appropriate control chart procedure and discuss whether
the welding process was in control.
b. Establish the appropriate control limits for future monitoring.
Table 21: Welding Quality data
Subgroup     n     d
    1       165   11
    2        85    7
    3        65    5
    4       165    9
    5        85    5
    6       161    9
    7        85    5
    8        61    1
    9       103    2
   10       405   36
   11        29    2
   12        33    2
   13        60    2
   14       119    3
   15        61    1
   16        37    3
   17        65    1
   18        49    5
   19       103    3
   20       113    3
   21       107    3

11.19 Suppose that a company is applying a single sampling plan with sample
size 160 and acceptance number 1 for lots of size 100,000.
a. Draw the OC curve of the plan.
b. Find the incoming or submitted quality that will be rejected 90% of
the time.
c. If the AQL is fixed at 0.1% nonconforming, find the probability of
acceptance at AQL.
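As a hint, a minimal Python sketch for the OC curve and parts b and c follows,
using the binomial model (reasonable here since the lot is very large relative
to the sample):

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom

n, ac = 160, 1
p = np.linspace(0, 0.04, 400)              # incoming fraction nonconforming
pa = binom.cdf(ac, n, p)                   # probability of acceptance

plt.plot(p, pa)
plt.xlabel("Lot fraction nonconforming p")
plt.ylabel("Probability of acceptance")
plt.title("OC curve: n = 160, Ac = 1")
plt.show()

# Part b: the quality rejected 90% of the time satisfies Pa(p) = 0.10.
p10 = p[np.argmin(np.abs(pa - 0.10))]
print(f"Quality with Pa = 0.10: approximately {p10:.4f}")
# Part c: probability of acceptance at AQL = 0.1% nonconforming.
print(f"Pa at p = 0.001: {binom.cdf(ac, n, 0.001):.4f}")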
11.20 Obtain the OC function for the following sampling plan:
Plan: From a large lot, only two units are randomly drawn. If both are
conforming, the lot is accepted. If both are nonconforming, the lot is
rejected. If only one unit is conforming, then one more unit is taken from
the remainder of the lot. If this unit is conforming, the lot is accepted;
otherwise, the lot is rejected.
11.21 Consider a single sampling plan with sample size n and acceptance
number Ac. Why should we not fix the Acceptance Quality Limit (AQL) as
AQL = Ac/n?
11.22 Compare the performance of the single sampling plans (n = 20, Ac = 0)
and (n = 50, Ac = 1) using their OC curves. Which plan provides better
discrimination between good and bad lots? Explain why.
11.23 A sweet corn processing factory is procuring cobs from farmers. The
export specifications require a cob to be at least 18 cm long with no distinct
off-coloured, crushed, dimpled or insect-damaged kernels. Consider each
truck load of cobs delivered as a lot for inspection purposes. Assume that
30 randomly drawn cobs are inspected and no nonconformity is tolerated.
Draw the OC curve of this plan. If the rejectable quality level is 1%,
compute the consumer's risk.
11.24 Activities/Experiments/Demonstrations
Funnel Experiment: see
Boardman, T. J. and Boardman, E. C. (1990). Don't Touch That
Funnel! Quality Progress, 23, pp. 65-69.
Arnold, K. J. (2001). The Deck of Cards. Quality Progress, 34, p. 112.
Red Bead Experiment: see
Turner, R. (1998). The Red Bead Experiment for Educators. Quality
Progress, 31, pp. 69-74.
M&Ms Experiment: see
Ellis, D. R. (2004). The Great M&Ms Experiment. Quality Progress,
37, p. 104.
DOE activities: see
Box, G. E. P. (1992). Teaching Engineers Experimental Design with a
Paper Helicopter. Quality Engineering, 4, pp. 453-459.
Hunter, W. G. (1975). "101 Ways to Design an Experiment, or Some
Ideas About Teaching Design of Experiments". Technical Report
No. 413, The University of Wisconsin-Madison.
Vandenbrande, W. (2005). Design of Experiments for Dummies.
Quality Progress, 38, pp. 59-65.
Wasiloff, E. and Hargitt, C. (1999). Using DOE to Determine AA
Battery Life. Quality Progress, 32, pp. 67-71.
Sarin, S. (1997). Teaching Taguchi's Approach to Parameter Design.
Quality Progress, 30, pp. 102-106.
Penny Demonstration: see
Schilling, E. G. (1973). A Penny Demonstration to Show the Sense of
Control Charts. ASQC 27th Annual Technical Conference,
Cleveland, OH.
k1-k2 Card Game: see
Burke, R. J., Davis, R. D. and Kaminsky, F. C. (1993). The (k1, k2)
Game. Quality Progress, 26, pp. 49-53.
