Statistical Process Control Methods


13 STATISTICAL QUALITY CONTROL

Learning Objectives
LO13–1 Illustrate process variation and explain how to measure it.
LO13–2 Analyze process quality using statistics.
LO13–3 Analyze the quality of batches of items using statistics.

CONTROL CHARTS SHOW YOU VARIATION THAT MATTERS

They say “variety is the spice of life,” but when it comes to doing business, variation is not your friend. That’s why we have control charts.

If one of these things is not like the others, is it common cause variation you can accept, or special cause variation that needs to be addressed? Control charts can tell you.

When you buy a burger from a fast-food restaurant you want consistency, not unpredictability. Now, the pickle on your burger may be closer to the edge of the bun today than it was last week, true—but as long as the pickle is there, it’s acceptable.

Businesses use statistical process control (SPC) to keep processes stable, consistent, and predictable, so they can ensure the quality of products and services. And one of the most common and useful tools in SPC is the control chart.

The control chart shows how a process or output varies over time so you can easily distinguish between “common cause” and “special cause” variations. Identifying the different causes of a variation lets you take action on a process without over-controlling it.

Common cause variation at our burger joint would be pickles being placed on different areas of the buns. We expect that level of variation, and it’s no big deal.

Special cause variation would be a sudden rash of burgers that have 10 pickles instead of 1. Clearly, something unusual is causing unacceptable variation, and it needs to be addressed!

Now you probably wouldn’t need a control chart to detect special cause variation that results in 10 pickles instead of 1 on a burger. But most process variation is much more subtle, and control charts can help you see special cause variation when it isn’t so obvious.


Source: Adapted from Eston Martz, Control Charts Show You Variation That Matters, The Minitab Blog, July 29, 2011.


LO13–1 Illustrate process variation and explain how to measure it.

STATISTICAL QUALITY CONTROL

This chapter on statistical quality control (SQC) covers the quantitative aspects of quality management. In general, SQC is a number of different techniques designed to evaluate quality from a conformance view; that is, how well are we doing at meeting the specifications that have been set during the design of the parts or services that we are providing? Managing quality performance using SQC techniques usually involves periodic sampling of a process and analysis of these data using statistically derived performance criteria.

Statistical quality control (SQC) A number of different techniques designed to evaluate quality from a conformance view.

As you will see, SQC can be applied to logistics, manufacturing, and service processes. Here are some examples of situations where SQC can be applied:

• How many paint defects are there in the finish of a car? Have we improved our painting process by installing a new sprayer?
• How long does it take to execute market orders in our web-based trading system? Has the installation of a new server improved the service? Does the performance of the system vary over the trading day?
• How well are we able to maintain the dimensional tolerance on our three-inch ball bearing assembly? Given the variability of our process for making this ball bearing, how many defects would we expect to produce per million bearings that we make?
• How long does it take for customers to be served from our drive-thru window during the busy lunch period?

Processes that provide goods and services usually exhibit some variation in their output. This variation can be caused by many factors, some of which we can control and others that are inherent in the process. Variation that is caused by factors that can be clearly identified and possibly even managed is called assignable variation. For example, variation caused by workers not being equally trained or by improper machine adjustment is assignable variation. Variation that is inherent in the process itself is called common variation. Common variation is often referred to as random variation and may be the result of the type of equipment used to complete a process, for example.

Assignable variation Deviation in the output of a process that can be clearly identified and managed.

Common variation Deviation in the output of a process that is random and inherent in the process itself.


THE ELMO CHICKEN DANCE TOY GETS A SOUND CHECK AT A MATTEL LAB IN SHENZHEN, CHINA. MATTEL LOBBIED TO LET ITS LABS CERTIFY TOY SAFETY. THE CALIFORNIA COMPANY HAS 10 LABS IN SIX COUNTRIES.

As the title of this section implies, this material requires an understanding of very basic statistics. Recall from your study of statistics involving numbers that are normally distributed the definition of the mean and standard deviation. The mean (X̄) is just the average value of a set of numbers. Mathematically this is

\bar{X} = \frac{\sum_{i=1}^{n} x_i}{n}   [13.1]

where:

x_i = Observed value
n = Total number of observed values

The standard deviation is

\sigma = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{X})^2}{n}}   [13.2]

In monitoring a process using SQC, samples of the process output would be taken and sample statistics calculated. The distribution associated with the samples should exhibit the same kind of variability as the actual distribution of the process, although the actual variance of the sampling distribution would be less. This is good because it allows the quick detection of changes in the actual distribution of the process. The purpose of sampling is to find when the process has changed in some nonrandom way, so that the reason for the change can be quickly determined.
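As a quick illustration of Equations 13.1 and 13.2, the following sketch computes the mean and (population) standard deviation of a small set of measurements in Python; the sample values here are made up for illustration and are not from the text.

```python
import math

# Hypothetical diameter measurements (inches); illustrative values only.
x = [10.01, 9.98, 10.02, 10.00, 9.99]

n = len(x)
mean = sum(x) / n                                          # Equation 13.1
sigma = math.sqrt(sum((xi - mean) ** 2 for xi in x) / n)   # Equation 13.2 (divide by n)

print(f"mean = {mean:.4f}, sigma = {sigma:.4f}")
```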

In SQC terminology, sigma (σ) is often used to refer to the sample standard deviation. As you will see in the examples, sigma is calculated in a few different ways, depending on the underlying theoretical distribution (i.e., a normal distribution or a Poisson distribution).

Understanding and Measuring Process Variation

It is generally accepted that as variation is reduced, quality is improved. Sometimes that knowledge is intuitive. If a train is always on time, schedules can be planned more precisely. If clothing sizes are consistent, time can be saved by ordering from a catalog. But rarely are such things thought about in terms of the value of low variability. With engineers, the knowledge is better defined. Pistons must fit cylinders, doors must fit openings, electrical components must be compatible, and boxes of cereal must have the right amount of raisins—otherwise quality will be unacceptable and customers will be dissatisfied.

However, engineers also know that it is impossible to have zero variability. For this reason, designers establish specifications that define not only the target value of something but also acceptable limits about the target. For example, if the aim value of a dimension is 10 inches, the design specifications might then be 10.00 inches ± 0.02 inch. This would tell the manufacturing department that, while it should aim for exactly 10 inches, anything between 9.98 and 10.02 inches is OK. These design limits are often referred to as the upper and lower specification limits.

Upper and lower specification limits The range of values in a measure associated with a process that is allowable given the intended use of the product or service.

A traditional way of interpreting such a specification is that any part that falls within the allowed range is equally good, whereas any part falling outside the range is totally bad. This is illustrated in Exhibit 13.1. (Note that the cost is zero over the entire specification range, and then there is a quantum leap in cost once the limit is violated.)

Genichi Taguchi, a noted quality expert from Japan, has pointed out that the traditional view illustrated in Exhibit 13.1 is nonsense for two reasons:



1. From the customer’s view, there is often practically no difference between a product just inside specifications and a product just outside. Conversely, there is a far greater difference between the quality of a product that is on target and the quality of one that is near a limit.

2. As customers get more demanding, there is pressure to reduce variability. However, Exhibit 13.1 does not reflect this logic.


exhibit 13.1 A Traditional View of the Cost of Variability

exhibit 13.2 Taguchi’s View of the Cost of Variability

Taguchi suggests that a more correct picture of the loss is shown in Exhibit 13.2. Notice that in this graph the cost is represented by a smooth curve. There are dozens of illustrations of this notion: the meshing of gears in a transmission, the speed of photographic film, the temperature in a workplace or department store. In nearly anything that can be measured, the customer sees not a sharp line, but a gradation of acceptability away from the “Aim” specification. Customers see the loss function as Exhibit 13.2 rather than Exhibit 13.1.

Of course, if products are consistently scrapped when they are outside specifications, the loss curve flattens out in most cases at a value equivalent to scrap cost in the ranges outside specifications. This is because such products, theoretically at least, will never be sold so there is no external cost to society. However, in many practical situations, either the process is capable of producing a very high percentage of product within specifications and 100 percent checking is not done, or if the process is not capable of producing within specifications, 100 percent checking is done and out-of-spec products can be reworked to bring them within specs. In any of these situations, the parabolic loss function is usually a reasonable assumption.
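Taguchi’s view in Exhibit 13.2 is usually written as a quadratic loss function, L(x) = k(x − target)². The sketch below evaluates that curve for a few measurements; the loss coefficient k and the dimension values are assumed for illustration and are not figures from the text.

```python
# Quadratic (Taguchi-style) loss: cost grows smoothly as a dimension drifts from the
# target, unlike the step-function view of Exhibit 13.1.
def taguchi_loss(x, target=10.00, k=250.0):
    """Loss (in dollars) for one unit measuring x; k is an assumed cost coefficient."""
    return k * (x - target) ** 2

for x in (10.00, 10.01, 10.02, 10.03):
    print(f"x = {x:.2f} in -> loss = ${taguchi_loss(x):.2f}")
```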

Measuring Process Capability


Taguchi argues that being within specification is not a yes/no decision, but rather a continuous function. Motorola quality experts, on the other hand, argue that the process used to produce a good or deliver a service should be so good that the probability of generating a defect should be very, very low. Motorola made process capability and product design famous by adopting Six Sigma limits. When a part is designed, certain dimensions are specified to be within the upper and lower specification limits.

As a simple example, assume engineers are designing a bearing for a rotating shaft—say an axle for the wheel of a car. There are many variables involved for both the bearing and the axle—for example, the width of the bearing, the size of the rollers, the size of the axle, the length of the axle, how it is supported, and so on. The designer specifies limits for each of these variables to ensure that the parts will fit properly. Suppose that initially a design is selected and the diameter of the bearing is set at 1.250 inches ± 0.005 inch. This means that acceptable parts may have a diameter that varies between 1.245 and 1.255 inches (which are the lower and upper specification limits).

Next, consider the process in which the bearing will be made. Consider that many different processes for making the bearing are available. Usually there are trade-offs that need to be considered when designing a process for making a part. The process, for example, might be fast but not consistent, or alternatively it might be slow but consistent. The consistency of a process for making the bearing can be measured by the standard deviation of the diameter measurement. A test can be run by making, say, 100 bearings and measuring the diameter of each bearing in the sample.

After running the test, the average or mean diameter is found to be 1.250 inches. Another way to say this is that the process is “centered” right in the middle of the upper and lower specification limits. In reality it may be difficult to have a perfectly centered process like this example. Consider that the diameter values have a standard deviation or sigma equal to 0.002 inch. What this means is that the process does not make each bearing exactly the same size.

As is discussed later in this chapter, normally a process is monitored using control charts such that if the process starts making bearings that are more than three standard deviations (± 0.006 inch) above or below 1.250 inches, the process is stopped. This means that the process will produce parts that vary between 1.244 (this is 1.250 – 3 × .002) and 1.256 (this is 1.250 + 3 × .002) inches. The 1.244 and 1.256 are referred to as the upper and lower process limits. Be careful and do not get confused with the terminology here. The process limits relate to how consistent the process is for making the bearing. The goal in managing the process is to keep it within plus or minus three standard deviations of the process mean. The specification limits are related to the design of the part. Recall that, from a design view, acceptable parts have a diameter between 1.245 and 1.255 inches (which are the lower and upper specification limits).

KEY IDEA

The main point of this is that the process should be able to make a part well within design specifications. Here we show how statistics are used to evaluate how good a process is.

As can be seen, process limits are slightly greater than the specification limits given by the designer. This is not good since the process will produce some parts that do not meet specifications. Companies with Six Sigma processes insist that a process making a part be capable of operating so that the design specification limits are six standard deviations away from the process mean. For the bearing process, how small would the process standard deviation need to be for it to be Six Sigma capable? Recall that the design specification was 1.250 inches plus or minus 0.005 inch. Consider that the 0.005 inch must relate to the variation in the process. Divide 0.005 inch by 6, which equals 0.00083, to determine the process standard deviation for a Six Sigma process. So for the process to be Six Sigma capable, the mean diameter produced by the process would need to be exactly 1.250 inches and the process standard deviation would need to be less than or equal to 0.00083 inch.

We can imagine that some of you are really confused at this point with the whole idea of Six Sigma. Why doesn’t the company, for example, just check the diameter of each bearing and throw out the ones with a diameter less than 1.245 or greater than 1.255? This could certainly be done and for many, many parts 100 percent testing is done. The problem is for a company that is making thousands of parts each hour, testing each critical dimension of each part made can be very expensive. For the bearing, there could easily be 10 or more additional critical dimensions in addition to the diameter. These would all need to be checked. Using a 100 percent testing approach, the company would spend more time testing than it takes to actually make the part! This is why a company uses small samples to periodically check that the process is in statistical control. We discuss exactly how this statistical sampling works later in the chapter.

We say that a process is capable when the mean and standard deviation of the process are operating such that the upper and lower control limits are acceptable relative to the upper and lower specification limits. Consider diagram A in Exhibit 13.3. This represents the distribution of the bearing diameter dimension in our original process. The average or mean value is 1.250 and the lower and upper design specifications are 1.245 and 1.255, respectively. Process control limits are plus and minus three standard deviations (1.244 and 1.256). Notice that there is a probability (the yellow areas) of producing defective parts.


exhibit 13.3 Process Capability

If the process can be improved by reducing the standard deviation associated with the bearing diameter, the probability of producing defective parts can be reduced. Diagram B in Exhibit 13.3 shows a new process where the standard deviation has been reduced to 0.00083 (the area outlined in green). Even though we cannot see it in the diagram, there is some probability that a defect could be produced by this new process, but that probability is very, very small.

Suppose that the central value or mean of the process shifts away from the center of the specification range. Exhibit 13.4 shows the mean shifted one standard deviation closer to the upper specification limit. This, of course, causes a slightly higher number of expected defects, but we can see that this is still very, very good. The capability index is used to measure how well our process is capable of producing relative to the design specifications. A description of how to calculate this index is in the next section.


Capability Index (Cpk)

The capability index (Cpk) shows how well the parts being produced fit into the range specified by the design specification limits. If the specification limits are larger than the three sigma allowed in the process, then the mean of the process can be allowed to drift off-center before readjustment, and a high percentage of good parts will still be produced.

Capability index (Cpk) The ratio of the range of values allowed by the design specifications divided by the range of values produced by a process.

Referring to Exhibits 13.3 and 13.4, the capability index (Cpk) reflects the position of the mean and tails of the process relative to the design specifications. The more off-center the process, the greater the chance of producing defective parts.

Because the process mean can shift in either direction, the direction of shift and its distance from the design specification set the limit on the process capability. The direction of shift is toward the smaller number.

For the Excel template, visit www.mhhe.com/jacobs14e.


exhibit 13.4 Process Capability with a Shift in the Process Mean

Formally stated, the capability index (Cpk) is calculated as the smaller of the following two ratios:

C_{pk} = \min\left[\frac{\bar{X} - \text{LSL}}{3\sigma}, \frac{\text{USL} - \bar{X}}{3\sigma}\right]   [13.3]

Working with our example in Exhibit 13.4, let’s assume our process is centered at 1.251 and σ = 0.00083 (σ is the symbol for standard deviation).

C_{pk} = \min\left[\frac{1.251 - 1.245}{3(.00083)}, \frac{1.255 - 1.251}{3(.00083)}\right] = \min\left[\frac{.006}{.00249}, \frac{.004}{.00249}\right]

Cpk = min[2.4 or 1.6]

Cpk = 1.6, which is the smaller number. This is a pretty good capability index since few defects will be produced by this process. This tells us that the process mean has shifted to the right, similar to that shown in Exhibit 13.4, but parts are still well within design specification limits.

At times it is useful to calculate the actual probability of producing a defect. Assuming that the process is producing with a consistent standard deviation, this is a fairly straightforward calculation, particularly when we have access to a spreadsheet. The approach is to calculate the probability of producing a part outside the lower and upper design specification limits, given the mean and standard deviation of the process.

Working with our example, where the process is not centered, with a mean of 1.251 inches, σ = .00083, LSL = 1.245, and USL = 1.255, we first need to calculate the Z scores associated with the upper and lower specification limits. Recall from your study of statistics that the Z score is the number of standard deviations either to the right or to the left of zero in a probability distribution.

Z_{LSL} = \frac{\text{LSL} - \bar{X}}{\sigma} \qquad Z_{USL} = \frac{\text{USL} - \bar{X}}{\sigma}

For our example,

Z_{LSL} = \frac{1.245 - 1.251}{.00083} = -7.2289 \qquad Z_{USL} = \frac{1.255 - 1.251}{.00083} = 4.8193

An easy way to get the probabilities associated with these Z values is to use the NORMSDIST function built into Excel (you also can use the table in Appendix G). The format for this function is NORMSDIST(Z), where Z is the Z value calculated above. Excel returns the following values. (You might get slightly different results from those given here, depending on the version of Excel you are using.)

NORMSDIST(–7.2289) = 2.43461E-13 and NORMSDIST(4.8193) = .99999928

Interpreting this information requires understanding exactly what the NORMSDIST function is providing. NORMSDIST is giving the cumulative probability to the left of the given Z value. Since Z = –7.2289 is the number of standard deviations associated with the lower specification limit, the fraction of parts that will be produced below this limit is 2.43461E-13. This number is in scientific notation, and the E-13 at the end means we need to move the decimal over 13 places to get the real fraction defective. So the fraction defective is .000000000000243461, which is a very small number! Similarly, we see that approximately .99999928 of our parts will be below our upper specification limit. What we are really interested in is the fraction that will be above this limit, since these are the defective parts. This fraction defective above the upper spec is 1 – .99999928 = .00000072 of our parts.

Adding these two fraction defective numbers together we get .000000720000243461. We can interpret this to mean that we expect only about .72 defective part per million. Clearly, this is a great process. You will discover as you work the problems at the end of the chapter that this is not always the case.
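The same Cpk and fraction-defective arithmetic can be reproduced outside a spreadsheet. The sketch below mirrors the bearing example (mean 1.251, σ = .00083, specifications 1.245 to 1.255) using only the Python standard library; the norm_cdf helper plays the role of Excel’s NORMSDIST.

```python
import math

def norm_cdf(z):
    # Standard normal cumulative distribution (the role NORMSDIST plays in Excel).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean, sigma = 1.251, 0.00083
lsl, usl = 1.245, 1.255

cpk = min((mean - lsl) / (3 * sigma), (usl - mean) / (3 * sigma))   # Equation 13.3

z_lsl = (lsl - mean) / sigma
z_usl = (usl - mean) / sigma
fraction_defective = norm_cdf(z_lsl) + (1.0 - norm_cdf(z_usl))

print(f"Cpk = {cpk:.2f}")
print(f"fraction defective = {fraction_defective:.2e} "
      f"({fraction_defective * 1e6:.2f} parts per million)")
```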

EXAMPLE 13.1
The quality assurance manager is assessing the capability of a process that puts pressurized grease in an aerosol can. The design specifications call for an average of 60 pounds per square inch (psi) of pressure in each can with an upper specification limit of 65 psi and a lower specification limit of 55 psi. A sample is taken from production and it is found that the cans average 61 psi with a standard deviation of 2 psi. What is the capability of the process? What is the probability of producing a defect?

For a step-by-step walkthrough of this example, visit www.mhhe.com/jacobs14_sbs_ch13.

SOLUTION
Step 1—Interpret the data from the problem.

LSL = 55 USL = 65 X̿ = 61 σ = 2

Step 2—Calculate the Cpk.

C_{pk} = \min\left[\frac{\bar{X} - \text{LSL}}{3\sigma}, \frac{\text{USL} - \bar{X}}{3\sigma}\right]

C_{pk} = \min\left[\frac{61 - 55}{3(2)}, \frac{65 - 61}{3(2)}\right]

Cpk = min[1, .6667] = .6667

This is not a very good capability index. We see why this is true in Step 3.

Step 3—Calculate the probability of producing a defective can.

Probability of a can with less than 55 psi:

Z = \frac{X - \bar{X}}{\sigma} = \frac{55 - 61}{2} = -3

NORMSDIST(–3) = 0.001349898

Probability of a can with more than 65 psi:

Z = \frac{X - \bar{X}}{\sigma} = \frac{65 - 61}{2} = 2

1 – NORMSDIST(2) = 1 – 0.977249868 = 0.022750132

Probability of a can less than 55 psi or more than 65 psi:

Probability = 0.001349898 + 0.022750132 = .024100030

Or approximately 2.4 percent of the cans will be defective.


The following table is a quick reference for the fraction of defective units for various design specification limits (expressed in standard deviations). This table assumes that the standard deviation is constant and that the process is centered exactly between the design specification limits.

DESIGN LIMITS   Cpk     DEFECTIVE PARTS     FRACTION DEFECTIVE
±1σ             .333    317 per thousand    .3173
±2σ             .667    45 per thousand     .0455
±3σ             1.0     2.7 per thousand    .0027
±4σ             1.333   63 per million      .000063
±5σ             1.667   574 per billion     .000000574
±6σ             2.0     2 per billion       .000000002

Motorola’s design specification limit of Six Sigma with a shift of the process off the mean by 1.5 σ (Cpk = 1.5) gives 3.4 defects per million. If the mean is exactly in the center (Cpk = 2), then 2 defects per billion are expected, as the table above shows.
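For a process centered between the specification limits, the fraction defective at ±kσ design limits is 2[1 − Φ(k)], and Cpk = k/3. The short sketch below regenerates the quick-reference table from that relationship; it is a verification aid, not part of the text.

```python
import math

def norm_cdf(z):
    # Standard normal cumulative distribution.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"{'Design limits':>13} {'Cpk':>6} {'Fraction defective':>20}")
for k in range(1, 7):
    cpk = k / 3.0                       # centered process: Cpk = k/3
    frac = 2.0 * (1.0 - norm_cdf(k))    # defects fall in both tails
    print(f"{'±' + str(k) + 'σ':>13} {cpk:>6.3f} {frac:>20.9f}")
```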

LO13–2 Analyze process quality using statistics.

STATISTICAL PROCESS CONTROL PROCEDURES

Process control is concerned with monitoring quality while the product or service is being produced. Typical objectives of process control plans are to provide timely information about whether currently produced items are meeting design specifications and to detect shifts in the process that signal that future products may not meet specifications. Statistical process control (SPC) involves testing a random sample of output from a process to determine whether the process is producing items within a preselected range.

Statistical process control (SPC) Techniques for testing a random sample of output from a process to determine whether the process is producing items within a prescribed range.

The examples given so far have all been based on quality characteristics (or variables) that are measurable, such as the diameter or weight of a part. Attributes are quality characteristics that are classified as either conforming or not conforming to specification. Goods or services may be observed to be either good or bad, or functioning or malfunctioning. For example, a lawnmower either runs or it doesn’t; it attains a certain level of torque and horsepower or it doesn’t. This type of measurement is known as sampling by attributes. Alternatively, a lawnmower’s torque and horsepower can be measured as an amount of deviation from a set standard. This type of measurement is known as sampling by variables. The following section describes some standard approaches to controlling processes: first an approach useful for attribute measures and then an approach for variable measures. Both of these techniques result in the construction of control charts. Exhibit 13.5 shows some examples for how control charts can be analyzed to understand how a process is operating.

Attributes Quality characteristics that are classified as either conforming or not conforming to specification.

To view a tutorial on SPC, visit www.mhhe.com/acobs14e_tutorial_ch13.


Process Control with Attribute Measurements: Using p-Charts

Measurement by attributes means taking samples and using a single decision—the item is good or it is bad. Because it is a yes or no decision, we can use simple statistics to create a p-chart with an upper process control limit (UCL) and a lower process control limit (LCL). We can draw these control limits on a graph and then plot the fraction defective of each individual sample tested. The process is assumed to be working correctly when the samples, which are taken periodically during the day, continue to stay between the control limits.

\bar{p} = \frac{\text{Total number of defective units from all samples}}{\text{Number of samples} \times \text{Sample size}}   [13.4]

s_p = \sqrt{\frac{\bar{p}(1 - \bar{p})}{n}}   [13.5]

UCL = \bar{p} + z s_p   [13.6]

LCL = \bar{p} - z s_p, or 0 if less than 0   [13.7]


where p̄ is the fraction defective, sp is the standard deviation, n is the sample size, and z is the number of standard deviations for a specific confidence. Typically, z = 3 (99.7 percent confidence) or z = 2.58 (99 percent confidence) is used.
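A minimal sketch of Equations 13.4 through 13.7 as a function: given the total defectives, the number of samples, and a fixed sample size, it returns p̄, sp, and the control limits. The function name and the z = 3 default are my own choices, not from the text.

```python
import math

def p_chart_limits(total_defective, num_samples, sample_size, z=3.0):
    """Centerline and control limits for a p-chart with a fixed sample size."""
    p_bar = total_defective / (num_samples * sample_size)   # Equation 13.4
    s_p = math.sqrt(p_bar * (1.0 - p_bar) / sample_size)    # Equation 13.5
    ucl = p_bar + z * s_p                                    # Equation 13.6
    lcl = max(0.0, p_bar - z * s_p)                          # Equation 13.7
    return p_bar, s_p, ucl, lcl

# The insurance-claim data of Example 13.2: 91 bad forms in 10 samples of 300.
print(p_chart_limits(91, 10, 300))   # approx (0.03033, 0.0099, 0.0600, 0.0006)
```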

exhibit 13.5 Process Control Chart Evidence for Investigation


Size of the Sample

The size of the sample must be large enough to allow counting of the attribute. For example, if we know that a machine produces 1 percent defective units, then a sample size of five would seldom capture a bad unit. A rule of thumb when setting up a p-chart is to make the sample large enough to expect to count the attribute twice in each sample. So an appropriate sample size if the defective rate were approximately 1 percent would be 200 units. One final note: In the calculations shown in equations 13.4 through 13.7, the assumption is that the sample size is fixed. The calculation of the standard deviation depends on this assumption. If the sample size varies, the standard deviation and upper and lower process control limits should be recalculated for each sample.


EXAMPLE 13.2: Process Control Chart Design
An insurance company wants to design a control chart to monitor whether insurance claim forms are being completed correctly. The company intends to use the chart to see if improvements in the design of the form are effective. To start the process, the company collected data on the number of incorrectly completed claim forms over the past 10 days. The insurance company processes thousands of these forms each day, and due to the high cost of inspecting each form, only a small representative sample was collected each day. The data and analysis are shown in Exhibit 13.6.

For a step-by-step walkthrough of this example, visit www.mhhe.com/jacobs14e_sbs_ch13.

SOLUTION
To construct the control chart, first calculate the overall fraction defective from all samples. This sets the centerline for the control chart.


For the Excel template, visit www.mhhe.com/jacobs14e.

\bar{p} = \frac{\text{Total number of defective units}}{\text{Number of samples} \times \text{Sample size}} = \frac{91}{10(300)} = .03033

Next calculate the sample standard deviation:

s_p = \sqrt{\frac{\bar{p}(1 - \bar{p})}{n}} = \sqrt{\frac{.03033(1 - .03033)}{300}} = .00990

Finally, calculate the upper and lower process control limits. A z-value of 3 gives 99.7 percent confidence that the process is within these limits.

UCL = p̄ + 3sp = .03033 + 3 (.00990) = .06003

LCL = p̄ − 3sp = .03033 − 3 (.00990) = .00063

The calculations in Exhibit 13.6, including the control chart, are included in the spreadsheet “SPC.xls.”

exhibit 13.6 Insurance Company Claim Form

SAMPLE   NUMBER INSPECTED   NUMBER OF FORMS COMPLETED INCORRECTLY   FRACTION DEFECTIVE
1        300                10                                      0.03333
2        300                 8                                      0.02667
3        300                 9                                      0.03000
4        300                13                                      0.04333
5        300                 7                                      0.02333
6        300                 7                                      0.02333
7        300                 6                                      0.02000
8        300                11                                      0.03667
9        300                12                                      0.04000
10       300                 8                                      0.02667
Totals   3,000              91                                      0.03033
Sample standard deviation                                           0.00990
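As a sketch of how the chart in Exhibit 13.6 would be checked numerically, the code below recomputes the fraction defective for each of the 10 samples and tests it against the three-sigma limits calculated above; the variable names are my own.

```python
import math

defects = [10, 8, 9, 13, 7, 7, 6, 11, 12, 8]   # incorrectly completed forms per sample
n = 300                                        # forms inspected in each sample

p_bar = sum(defects) / (len(defects) * n)
s_p = math.sqrt(p_bar * (1 - p_bar) / n)
ucl, lcl = p_bar + 3 * s_p, max(0.0, p_bar - 3 * s_p)

for i, d in enumerate(defects, start=1):
    frac = d / n
    status = "in control" if lcl <= frac <= ucl else "out of control"
    print(f"sample {i:2d}: fraction defective = {frac:.5f} ({status})")
```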


Process Control with Attribute Measurements: Using c-Charts

In the case of the p-chart, the item was either good or bad. There are times when the product or service can have more than one defect. For example, a board sold at a lumberyard may have multiple knotholes and, depending on the quality grade, may or may not be defective. When it is desired to monitor the number of defects per unit, the c-chart is appropriate.

The underlying distribution for the c-chart is the Poisson, which is based on the assumption that defects occur randomly on each unit. If c is the number of defects for a particular unit, then c̅ is the average number of defects per unit, and the standard deviation is √c̄ . For the purposes of our control chart we use the normal approximation to the Poisson distribution and construct the chart using the following control limits.


c̄ = Average number of defects per unit   [13.8]

sp = √c̄ [13.9]

UCL = c̄ + z√c̄ [13.10]

LCL = c̄ − z√c̄, or 0 if less than 0   [13.11]

Just as with the p-chart, typically z = 3 (99.7 percent confidence) or z = 2.58 (99 percent confidence) is used.

EXAMPLE 13.3
The owners of a lumberyard want to design a control chart to monitor the quality of 2 × 4 boards that come from their supplier. For their medium-quality boards they expect an average of four knotholes per 8-foot board. Design a control chart for use by the person receiving the boards using three-sigma (standard deviation) limits.

For a step-by-step walkthrough of this example, visit www.mhhe.com/jacobs14e_sbs_ch13.

SOLUTION
For this problem, c̄ = 4, sp = √c̄ = 2

UCL = c̄ + z√c̄ = 4 + 3 (2) = 10

LCL = c̄ − z√c̄ = 4 − 3(2) = −2 → 0 (Zero is used since it is not possible to have a negative number of defects.)
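A small sketch of Equations 13.8 through 13.11 applied to the lumberyard example (c̄ = 4 knotholes per board, three-sigma limits); the function wrapper is my own, not part of the text.

```python
import math

def c_chart_limits(c_bar, z=3.0):
    """Control limits for a c-chart (defects per unit, Poisson-based)."""
    s = math.sqrt(c_bar)              # Equation 13.9
    ucl = c_bar + z * s               # Equation 13.10
    lcl = max(0.0, c_bar - z * s)     # Equation 13.11, floored at zero
    return ucl, lcl

print(c_chart_limits(4))   # -> (10.0, 0.0), matching Example 13.3
```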


Process Control with Variable Measurements: Using X̄- and R-Charts

X̄- and R- (range) charts are widely used in statistical process control.

In attribute sampling, we determine whether something is good or bad, fits or doesn’t fit—it is a go/no-go situation. In variables sampling, however, we measure the actual weight, volume, number of inches, or other variable measurements, and we develop control charts to determine the acceptability or rejection of the process based on those measurements. For example, in attribute sampling, we might decide that if something is over 10 pounds we will reject it and under 10 pounds we will accept it. In variables sampling, we measure a sample and may record weights of 9.8 pounds or 10.2 pounds. These values are used to create or modify control charts and to see whether they fall within the acceptable limits.

There are four main issues to address in creating a control chart: the size of the samples, number of samples, frequency of samples, and control limits.

Variables Quality characteristics that are measured in actual weight, volume, inches, centimeters, or other measure.

Size of Samples

For industrial applications in process control involving the measurement of variables, it is preferable to keep the sample size small. There are two main reasons. First, the sample needs to be taken within a reasonable length of time; otherwise, the process might change while the samples are taken. Second, the larger the sample, the more it costs to take.


CONTROL CHECK OF CAR AXLE AT DANA CORPORATION RESEARCH AND DEVELOPMENT CENTER.

Sample sizes of four or five units seem to be the preferred numbers. The means of samples of this size have an approximately normal distribution, no matter what the distribution of the parent population looks like. Sample sizes greater than five give narrower process control limits and thus more sensitivity. For detecting finer variations of a process, it may be necessary, in fact, to use larger sample sizes. However, when sample sizes exceed 15 or so, it would be better to use X̄-charts with the standard deviation σ rather than X̄-charts with the range R as we use in Example 13.4.

Number of Samples

Once the chart has been set up, each sample taken can be compared to the chart and a decision can be made about whether the process is acceptable. To set up the charts, however, prudence and statistics suggest that 25 or so sample sets be analyzed.

Frequency of Samples

How often to take a sample is a trade-off between the cost of sampling (along with the cost of the unit if it is destroyed as part of the test) and the benefit of adjusting the system. Usually, it is best to start off with frequent sampling of a process and taper off as confidence in the process builds. For example, one might start with a sample of five units every half hour and end up feeling that one sample per day is adequate.

Control Limits

Standard practice in statistical process control for variables is to set control limits three standard deviations above the mean and three standard deviations below. This means that 99.7 percent of the sample means are expected to fall within these process control limits (that is, within a 99.7 percent confidence interval). Thus, if one sample mean falls outside this obviously wide band, we have strong evidence that the process is out of control.


How to Construct X̄- and R-Charts

If the standard deviation of the process distribution is known, the X̄-chart may be defined:

UCL_{\bar{x}} = \bar{\bar{X}} + zS_{\bar{x}} \quad \text{and} \quad LCL_{\bar{x}} = \bar{\bar{X}} - zS_{\bar{x}}   [13.12]

where

S_{\bar{x}} = s/\sqrt{n} = Standard deviation of sample means
s = Standard deviation of the process distribution
n = Sample size
X̿ = Average of sample means or a target value set for the process
z = Number of standard deviations for a specific confidence level (typically, z = 3)

An X̄-chart is simply a plot of the means of the samples that were taken from a process. X̿ is the average of the means. In practice, the standard deviation of the process is not known. For this reason, an approach that uses actual sample data is commonly used. This practical approach is described in the next section.

An R-chart is a plot of the average of the range within each sample. The range is the difference between the highest and the lowest numbers in that sample. R values provide an easily calculated measure of variation used like a standard deviation. R̄ is the average of the range of each sample. More specifically defined, these are

\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}   [Same as 13.1]

where

X̄ = Mean of the sample
i = Item number
n = Total number of items in the sample


\bar{\bar{X}} = \frac{\sum_{j=1}^{m} \bar{X}_j}{m}   [13.13]

where

X̿ = The average of the means of the samples
j = Sample number
m = Total number of samples

\bar{R} = \frac{\sum_{j=1}^{m} R_j}{m}   [13.14]

where

Rj = Difference between the highest and lowest measurement in the sample
R̄ = Average of the measurement differences R for all samples

E. L. Grant and R. S. Leavenworth computed a table (Exhibit 13.7) that allows us to easily compute the upper and lower control limits for both the X̄-chart and the R-chart.1 These are defined as

Upper control limit for x̅ = x̿ + A2R̅ [13.15]

Lower control limit for x̅ = x̿ – A2R̅ [13.16]

Upper control limit for R = D4R̅ [13.17]

Lower control limit for R = D3R̅ [13.18]

exhibit 13.7 Factor for Determining from R̄ the Three-Sigma Control Limits for X̄- and R-Charts



Note: All factors are based on the normal distribution.

For the Excel template, visit www.mhhe.com/jacobs14e.


EXAMPLE 13.4: X̄- and R-Charts
We would like to create X̄- and R-charts for a process. Exhibit 13.8 shows measurements for all 25 samples. The last two columns show the average of the sample, X̄, and the range, R.

Values for A2, D3, and D4 were obtained from Exhibit 13.7.

Upper control limit for X̄ = X̿ + A2R̄ = 10.21 + .58(.60) = 10.56
Lower control limit for X̄ = X̿ − A2R̄ = 10.21 − .58(.60) = 9.86
Upper control limit for R = D4R̄ = 2.11(.60) = 1.27
Lower control limit for R = D3R̄ = 0(.60) = 0

For a step-by-step walkthrough of this example, visit www.mhhe.com/jacobs14e_sbs_ch13.

SOLUTION
Exhibit 13.9 shows the X̄-chart and R-chart with a plot of all the sample means and ranges of the samples. All the points are well within the control limits, although sample 23 is close to the X̄ lower control limit and samples 13 through 17 are above the target.
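A sketch of Equations 13.15 through 13.18 using the Example 13.4 values (X̿ = 10.21, R̄ = .60) and the Exhibit 13.7 factors for samples of five (A2 = .58, D3 = 0, D4 = 2.11). Only those numbers come from the example; the function itself is my own.

```python
def xbar_r_limits(x_double_bar, r_bar, a2, d3, d4):
    """Three-sigma control limits for X-bar and R charts from table factors."""
    return {
        "xbar_ucl": x_double_bar + a2 * r_bar,   # Equation 13.15
        "xbar_lcl": x_double_bar - a2 * r_bar,   # Equation 13.16
        "r_ucl": d4 * r_bar,                     # Equation 13.17
        "r_lcl": d3 * r_bar,                     # Equation 13.18
    }

# Example 13.4: samples of five -> A2 = .58, D3 = 0, D4 = 2.11 (Exhibit 13.7)
print(xbar_r_limits(10.21, 0.60, a2=0.58, d3=0.0, d4=2.11))
```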


exhibit 13.8 Measurements in Samples of Five from a Process


For the Excel template, visit www.mhhe.com/jacobs14e.


exhibit 13.9 X̄-Chart and R-Chart


LO13–3 Analyze the quality of batches of items using statistics.

ACCEPTANCE SAMPLING

Design of a Single Sampling Plan for Attributes

Acceptance sampling is performed on goods that already exist to determine what percentage of products conform to specifications. These products may be items received from another company and evaluated by the receiving department, or they may be components that have passed through a processing step and are evaluated by company personnel either in production or later in the warehousing function. Whether inspection should be done at all is addressed in the following example.

Acceptance sampling is executed through a sampling plan. In this section, we illustrate the planning procedures for a single sampling plan—that is, a plan in which the quality is determined from the evaluation of one sample. (Other plans may be developed using two or more samples. See J. M. Juran and F. M. Gryna’s Quality Planning and Analysis for a discussion of these plans.)

EXAMPLE 13.5: Costs to Justify Inspection
Total (100 percent) inspection is justified when the cost of a loss incurred by not inspecting is greater than the cost of inspection. For example, suppose a faulty item results in a $10 loss and the average percentage of defective items in the lot is 3 percent.

For a step-by-step walkthrough of this example, visit www.mhhe.com/jacobs14e_sbs_ch13.

SOLUTION
If the average percentage of defective items in a lot is 3 percent, the expected cost of faulty items is 0.03 × $10, or $0.30 each. Therefore, if the cost of inspecting each item is less than $0.30, the economic decision is to perform 100 percent inspection. Not all defective items will be removed, however, because inspectors will pass some bad items and reject some good ones.

The purpose of a sampling plan is to test the lot to either (1) find its quality or (2) ensure that the quality is what it is supposed to be. Thus, if a quality control supervisor already knows the quality (such as the 0.03 given in the example), he or she does not sample for defects. Either all of them must be inspected to remove the defects or none of them should be inspected, and the rejects pass into the process. The choice simply depends on the cost to inspect and the cost incurred by passing a reject.



A single sampling plan is defined by n and c, where n is the number of units in the sample and c is the acceptance number. The size of n may vary from one up to all the items in the lot (usually denoted as N) from which it is drawn. The acceptance number c denotes the maximum number of defective items that can be found in the sample before the lot is rejected. Values for n and c are determined by the interaction of four factors (AQL, α, LTPD, and β) that quantify the objectives of the product’s producer and its consumer. The objective of the producer is to ensure that the sampling plan has a low probability of rejecting good lots. Lots are defined as high quality if they contain no more than a specified level of defectives, termed the acceptable quality level (AQL).2 The objective of the consumer is to ensure that the sampling plan has a low probability of accepting bad lots. Lots are defined as low quality if the percentage of defectives is greater than a specified amount, termed lot tolerance percent defective (LTPD). The probability associated with rejecting a high-quality lot is denoted by the Greek letter alpha (α) and is termed the producer’s risk. The probability associated with accepting a low-quality lot is denoted by the letter beta (β) and is termed the consumer’s risk. The selection of particular values for AQL, α, LTPD, and β is an economic decision based on a cost trade-off or, more typically, on company policy or contractual requirements.

There is a humorous story supposedly about Hewlett-Packard during its first dealings with Japanese vendors, who place great emphasis on high-quality production. HP had insisted on 2 percent AQL in a purchase of 100 cables. During the purchase negotiations, some heated discussion took place wherein the Japanese vendor did not want this AQL specification; HP insisted that it would not budge from the 2 percent AQL. The Japanese vendor finally agreed. Later, when the box arrived, there were two packages inside. One contained 100 good cables. The other package had 2 cables with a note stating: “We have sent you 100 good cables. Since you insisted on 2 percent AQL, we have enclosed 2 defective cables in this package, though we do not understand why you want them.”

The following example, using an excerpt from a standard acceptance sampling table, illustrates how the four parameters—AQL, α, LTPD, and β—are used in developing a sampling plan.

EXAMPLE 13.6: Values of n and c
Hi-Tech Industries manufactures Z-Band radar scanners used to detect speed traps. The printed circuit boards in the scanners are purchased from an outside vendor. The vendor produces the boards to an AQL of 2 percent defectives and is willing to run a 5 percent risk (α) of having lots of this level or fewer defectives rejected. Hi-Tech considers lots of 8 percent or more defectives (LTPD) unacceptable and wants to ensure that it will accept such poor quality lots no more than 10 percent of the time (β). A large shipment has just been delivered. What values of n and c should be selected to determine the quality of this lot?

For a step-by-step walkthrough of this example, visit www.mhhe.com/jacobs14e_sbs_ch13.


SOLUTION
The parameters of the problem are AQL = 0.02, α = 0.05, LTPD = 0.08, and β = 0.10. We can use Exhibit 13.10 to find c and n.

First, divide LTPD by AQL (0.08/0.02 = 4). Then, find the ratio in column 2 that is equal to or just greater than that amount (4). This value is 4.057, which is associated with c = 4.

Finally, find the value in column 3 that is in the same row as c = 4, and divide that quantity by AQL to obtain n (1.970/0.02 = 98.5).

The appropriate sampling plan is c = 4, n = 99. Ninety-nine scanners will be inspected and if more than 4 defective units are found, the lot will be rejected.

exhibit 13.10 Excerpt from a Sampling Plan Table for α = 0.05, β = 0.10
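The producer’s and consumer’s risks implied by the plan (n = 99, c = 4) can be checked directly with the binomial distribution, since the probability of accepting a lot is the probability of finding c or fewer defectives in the sample. The sketch below is a verification of Example 13.6, not part of the table-lookup procedure in the text.

```python
from math import comb

def prob_accept(n, c, p):
    """Probability of accepting a lot with true fraction defective p under an (n, c) plan."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 99, 4
print(f"P(accept | AQL = 0.02)  = {prob_accept(n, c, 0.02):.3f}")   # about 0.95 -> alpha near 0.05
print(f"P(accept | LTPD = 0.08) = {prob_accept(n, c, 0.08):.3f}")   # about 0.10 -> beta near 0.10
```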


Operating Characteristic Curves

While a sampling plan such as the one just described meets our requirements for the extreme values of good and bad quality, we cannot readily determine how well the plan discriminates between good and bad lots at intermediate values. For this reason, sampling plans are generally displayed graphically through the use of operating characteristic (OC) curves. These curves, which are unique for each combination of n and c, simply illustrate the probability of accepting lots with varying percentages of defectives. The procedure we have followed in developing the plan, in fact, specifies two points on an OC curve: one point defined by AQL and 1 – α and the other point defined by LTPD and β. Curves for common values of n and c can be computed or obtained from available tables.3

exhibit 13.11 Operating Characteristic Curve for AQL = 0.02, α = 0.05, LTPD = 0.08, β = 0.10

Shaping the OC Curve

A sampling plan discriminating perfectly between good and bad lots has an infinite slope (vertical) at the selected value of AQL. In Exhibit 13.11, any percentage defective to the left of 2 percent would always be accepted, and those to the right, always rejected. However, such a curve is possible only with complete inspections of all units and thus is not a possibility with a true sampling plan.

An OC curve should be steep in the region of most interest (between the AQL and the LTPD), which is accomplished by varying n and c. If c remains constant, increasing the sample size n causes the OC curve to be more vertical. While holding n constant, decreasing c (the maximum number of defective units) also makes the slope more vertical, moving closer to the origin.

The Effects of Lot Size

The size of the lot that the sample is taken from has relatively little effect on the quality of protection. Consider, for example, that samples—all of the same size of 20 units—are taken from different lots ranging from a lot size of 200 units to a lot size of infinity. If each lot is known to have 5 percent defectives, the probability of accepting the lot based on the sample of 20 units ranges from about 0.34 to about 0.36. This means that as long as the lot size is several times the sample size, it makes little difference how large the lot is. It seems a bit difficult to accept, but statistically (on the average in the long run) whether we have a carload or box full, we’ll get about the same answer. It just seems that a carload should have a larger sample size. Of course, this assumes that the lot is randomly chosen and that defects are randomly spread through the lot.


CONCEPT CONNECTIONS

LO13-1 Illustrate process variation and explain how to measure it.

Summary
This chapter covers the quantitative aspect of quality management. Variation is inherent in all processes and can be caused by many factors. Variation caused by identifiable factors is called assignable variation and can possibly be managed. Variation inherent in a process is called common or random variation. Statistical quality control (SQC) involves sampling output from a process and using statistics to find when the process has changed in a nonrandom way. When a product or service is designed, specification limits are assigned relative to critical parameters. The process is designed to work so that the probability of output being outside these limits is relatively low. The capability index of a process measures its ability to consistently produce within the specification limits.

Key Terms
Statistical quality control (SQC)
Common variation
Capability index
Assignable variation
Upper and lower specification limits

Key Formulas

Mean or average

[13.1] \bar{X} = \frac{\sum_{i=1}^{n} x_i}{n}

Standard deviation

[13.2] \sigma = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{X})^2}{n}}

Capability index

[13.3] C_{pk} = \min\left[\frac{\bar{X} - \text{LSL}}{3\sigma}, \frac{\text{USL} - \bar{X}}{3\sigma}\right]

LO13-2 Analyze process quality using statistics.

Summary
Statistical process control involves monitoring the quality of a process as it is operating. Control charts are used to visually monitor the status of a process over time. Attributes are characteristics that can be evaluated as either conforming or not conforming to the design specifications. Control charts useful for attribute characteristics are the p-chart and the c-chart. When the characteristic is measured as a variable measure, for example, weight or diameter, X̄- and R-charts are used.

Key Terms



Statistical process control (SPC)
Attributes
Variables

Key Formulas

Process control charts using attribute measurements

[13.4] \bar{p} = \frac{\text{Total number of defective units from all samples}}{\text{Number of samples} \times \text{Sample size}}

[13.5] s_p = \sqrt{\frac{\bar{p}(1 - \bar{p})}{n}}

[13.6] UCL = \bar{p} + z s_p

[13.7] LCL = \bar{p} - z s_p, or 0 if less than 0

[13.8] \bar{c} = Average number of defects per unit

[13.9] s_p = \sqrt{\bar{c}}

[13.10] UCL = \bar{c} + z\sqrt{\bar{c}}

[13.11] LCL = \bar{c} - z\sqrt{\bar{c}}, or 0 if less than 0

Process control X̄- and R-charts

[13.12] UCL_{\bar{x}} = \bar{\bar{X}} + zS_{\bar{x}} \quad \text{and} \quad LCL_{\bar{x}} = \bar{\bar{X}} - zS_{\bar{x}}

[13.13] \bar{\bar{X}} = \frac{\sum_{j=1}^{m} \bar{X}_j}{m}

[13.14] \bar{R} = \frac{\sum_{j=1}^{m} R_j}{m}



[13.15] Upper control limit for X̄ = X̿ + A2R̄

[13.16] Lower control limit for X̄ = X̿ − A2R̄

[13.17] Upper control limit for R = D4R̄

[13.18] Lower control limit for R = D3R̄

LO13-3 Analyze the quality of batches of items using statistics.

Summary
Acceptance sampling is used to evaluate if a batch of parts, as received in an order for example, conforms to specification limits. This is useful in the area where material is received from suppliers. An acceptance sampling plan is defined by a sample size and the number of acceptable defects in the sample. Since the sampling plan is defined using statistics, there is the possibility that a bad lot will be accepted, which is the consumer’s risk, and that a good lot will be rejected, which is the producer’s risk.

Solved Problems

LO13–1
SOLVED PROBLEM 1
HVAC Manufacturing produces parts and materials for the heating, ventilation, and air conditioning industry. One of its facilities produces metal ductwork in various sizes for the home construction market. One particular product is 6-inch diameter round metal ducting. It is a simple product, but the diameter of the finished ducting is critical. If it is too small or large, contractors will have difficulty fitting the ducting into other parts of the system. The target diameter is 6 inches exactly, with an acceptable tolerance of ±.03 inch. Anything produced outside of specifications is considered defective. The line supervisor for this product has data showing that the actual diameter of finished product is 5.99 inches with a standard deviation of .01 inch.

a. What is the current capability index of this process? What is the probability of producing a defective unit in this process?

b. The line supervisor thinks he will be able to adjust the process so that the mean diameter of output is the same as the target diameter, without any change in the process variation. What would the capability index be if he is successful? What would be the probability of producing a defective unit in this adjusted process?

c. Through better training of employees and investment in equipment upgrades, the company could produce output with a mean diameter equal to the target and a standard deviation of .005 inches. What would the capability index be if this were to happen? What would be the probability of producing a defective unit in this case?

Solution
a. X̄ = 5.99   LSL = 6.00 − .03 = 5.97   USL = 6.00 + .03 = 6.03   σ = .01

C_{pk} = \min\left[\frac{5.99 - 5.97}{3(.01)}, \frac{6.03 - 5.99}{3(.01)}\right] = \min[.667, 1.333] = 0.667

This process is not what would be considered capable. The capability index is based on the LSL, showing that the process mean is lower than the target. To find the probability of a defective unit, we need to find the Z scores of the LSL and USL with respect to the current process:


Z_{LSL} = \frac{\text{LSL} - \bar{X}}{\sigma} = \frac{5.97 - 5.99}{.01} = -2.00 \qquad \text{NORMSDIST}(-2.00) = .02275


2.275 percent of output will be too small.

Z_{USL} = \frac{\text{USL} - \bar{X}}{\sigma} = \frac{6.03 - 5.99}{.01} = 4.00 \qquad \text{NORMSDIST}(4.00) = .999968

The probability of too large a unit is 1 – .999968 = .000032, so .0032 percent of output will be too large. The probability of producing a defective unit is .02275 + .000032 = .022782, so 2.2782 percent of output will be defective. As a numerical example, 22,782 out of every million units will be defective.

b. X̿ = 6.00 LSL = 6.00 – .03 = 5.97 USL = 6.00 + .03 = 6.03 σ = .01

C_{pk} = \min\left[\frac{6.00 - 5.97}{3(.01)}, \frac{6.03 - 6.00}{3(.01)}\right] = \min[1.00, 1.00] = 1.00

Z_{LSL} = \frac{\text{LSL} - \bar{X}}{\sigma} = \frac{5.97 - 6.00}{.01} = -3.00 \qquad \text{NORMSDIST}(-3.00) = .00135

Only 0.135 percent of output will be too small.

ZUSL = = = 3.00 NorMSDIST (3.00) = .99865

The probability of too large a unit is 1 − .99865 = .00135, so 0.135 percent of output will be too large. The probability of producing a defective unit is .00135 + .00135 = .0027, so 0.27 percent of output will be defective. As a numerical example, 2,700 out of every million units will be defective. That’s about a 90 percent reduction in defective output just from adjusting the process mean! Because the process is exactly centered on the target and the specification limits are three standard deviations away from the process mean, this adjusted process has a Cpk = 1.00. In order to do any better than that, we would need to reduce the variation in the process, as shown in part (c).

c. X̄ = 6.00   LSL = 6.00 − .03 = 5.97   USL = 6.00 + .03 = 6.03   σ = .005

Cpk = min[(X̄ − LSL)/3σ or (USL − X̄)/3σ] = min[(6.00 − 5.97)/.015 or (6.03 − 6.00)/.015] = min[2.00 or 2.00] = 2.00

We have doubled the process capability index by cutting the standard deviation of the process in half. What will be the effect on the probability of defective output?

ZLSL = (LSL − X̄)/σ = (5.97 − 6.00)/.005 = −6.00   NORMSDIST(−6.00) = 0.0000000009866

ZUSL = (USL − X̄)/σ = (6.03 − 6.00)/.005 = 6.00   NORMSDIST(6.00) = 0.9999999990134

Following earlier logic, the probability of producing a defective unit in this case is just 0.000000001973, a very small probability indeed! Using the earlier numerical example, this would result in only .001973 defective units out of every million. By cutting the process standard deviation in half, we could gain far more than a 50 percent reduction in defective output—in this case essentially eliminating defective units due to the diameter of the ducting. This example demonstrates the power and importance of Six Sigma quality concepts.
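All three parts follow the same arithmetic, so they can be checked with a short calculation. The sketch below is not from the text: the function name is made up, and Python's NormalDist is used here as a stand-in for the spreadsheet NORMSDIST function cited above.

    from statistics import NormalDist

    def cpk_and_defect_rate(mean, sigma, lsl, usl):
        # Capability index and expected fraction defective, assuming
        # normally distributed output as in the solved problem.
        cpk = min((mean - lsl) / (3 * sigma), (usl - mean) / (3 * sigma))
        z = NormalDist()                              # standard normal
        p_small = z.cdf((lsl - mean) / sigma)         # unit too small
        p_large = 1 - z.cdf((usl - mean) / sigma)     # unit too large
        return cpk, p_small + p_large

    # Parts (a), (b), and (c): (mean, sigma) pairs from the solution above.
    for mean, sigma in [(5.99, 0.01), (6.00, 0.01), (6.00, 0.005)]:
        cpk, p_def = cpk_and_defect_rate(mean, sigma, lsl=5.97, usl=6.03)
        print(round(cpk, 3), p_def)   # 0.667/.0228, 1.0/.0027, 2.0/~2e-9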



LO13–2 SOLVED PROBLEM 2

Completed forms from a particular department of an insurance company were sampled daily to check the performance quality of that department. To establish a tentative norm for the department, one sample of 100 units was collected each day for 15 days, with these results:

For the Excel template, visit www.mhhe.com/jacobs14e.

SAMPLE   SAMPLE SIZE   NUMBER OF FORMS WITH ERRORS
1        100           4
2        100           3
3        100           5
4        100           0
5        100           2
6        100           8
7        100           1
8        100           3
9        100           4
10       100           2
11       100           7
12       100           2
13       100           1
14       100           3
15       100           1

a. Develop a p-chart using a 95 percent confidence interval (z = 1.96). b. Plot the 15 samples collected. c. What comments can you make about the process?

Solution
a. p̄ = 46/[15(100)] = .0307

sp = √[p̄(1 − p̄)/n] = √[.0307(1 − .0307)/100] = √.0003 = .017

UCL = p̄ + 1.96sp = .031 + 1.96(.017) = .064
LCL = p̄ − 1.96sp = .031 − 1.96(.017) = −.00232, or zero

b. The defectives are plotted below.

c. Of the 15 samples, 2 were out of the control limits. Because the control limits were established as 95 percent, or 1 out of 20, we would say that the process is out of control. It needs to be examined to find the cause of such widespread variation.
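The p-chart arithmetic above is easy to verify. The sketch below (variable names are mine, not the text's) recomputes the limits from the error counts in the table and flags the samples that fall outside them.

    from math import sqrt

    # Forms with errors in each of the 15 daily samples (from the table above).
    errors = [4, 3, 5, 0, 2, 8, 1, 3, 4, 2, 7, 2, 1, 3, 1]
    n = 100          # forms checked per sample
    z = 1.96         # 95 percent confidence

    p_bar = sum(errors) / (len(errors) * n)    # 46 / 1,500 = .0307
    s_p = sqrt(p_bar * (1 - p_bar) / n)        # standard error of the proportion
    ucl = p_bar + z * s_p
    lcl = max(p_bar - z * s_p, 0.0)            # a negative limit is set to zero

    out = [i + 1 for i, d in enumerate(errors) if not lcl <= d / n <= ucl]
    print(round(p_bar, 4), round(ucl, 3), round(lcl, 3), out)   # samples 6 and 11 flagged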

LO13–3 SOLVED PROBLEM 3

Management is trying to decide whether Part A, which is produced with a consistent 3 percent defective rate, should be inspected. If it is not inspected, the 3 percent defectives will go through a product assembly phase and have to be replaced later. If all Part A's are inspected, one-third of the defectives will be found, thus raising the quality to 2 percent defectives.

a. Should the inspection be done if the cost of inspecting is $0.01 per unit and the cost of replacing a defective in the final assembly is $4.00?

b. Suppose the cost of inspecting is $0.05 per unit rather than $0.01. Would this change your answer in (a)?


Solution Should Part A be inspected?

.03 defective with no inspection.

.02 defective with inspection.

a. This problem can be solved simply by looking at the opportunity for 1 percent improvement.

Benefit = .01($4.00) = $0.04
Cost of inspection = $0.01
Therefore, inspect and save $0.03 per unit.

b. A cost of $0.05 per unit to inspect would be $0.01 greater than the savings, so inspection should not be performed.
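The break-even comparison in this problem is simply per-unit benefit versus per-unit cost. A small sketch (the function name is mine) using the figures from parts (a) and (b):

    def net_benefit_of_inspection(defect_reduction, replacement_cost, inspection_cost):
        # Benefit per unit is the reduction in the defective rate times the
        # replacement cost avoided; inspect when the result is positive.
        return defect_reduction * replacement_cost - inspection_cost

    print(net_benefit_of_inspection(0.01, 4.00, 0.01))   # part (a): about +0.03, so inspect
    print(net_benefit_of_inspection(0.01, 4.00, 0.05))   # part (b): about -0.01, so do not inspect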

Discussion Questions

LO13–1

1. The capability index allows for some drifting of the process mean. Discuss what this means in terms of product quality output.

2. In an agreement between a supplier and a customer, the supplier must ensure that all parts are within specification before shipment to the customer. What is the effect on the cost of quality to the customer?

3. In the situation described in question 2, what would be the effect on the cost of quality to the supplier?

LO13–2
4. Discuss the purposes of and differences between p-charts and X̄- and R-charts.
5. The application of control charts is straightforward in manufacturing processes when you have tangible goods with physical characteristics you can easily measure on a numerical scale. Quality control is also important in service businesses, but you are generally not going to want to measure the physical characteristics of your customers! Do you think control charts have a place in service businesses? Discuss how you might apply them to specific examples.

LO13–3
6. Discuss the trade-off between achieving a zero AQL (acceptable quality level) and a positive AQL (such as an AQL of 2 percent).
7. The cost of performing inspection sampling moves inversely to the cost of quality failures. We can reduce the cost of quality failures by increased levels of inspection, but that of course would increase the cost of inspection. Can you think of any methods to reduce the cost of quality failures without increasing a company's cost of inspection? Think specifically in terms of material purchased from vendors.

Objective Questions

LO13–1

1. A company currently using an inspection process in its material receiving department is trying to install an overall cost reduction program. One possible reduction is the elimination of one inspection position. This position tests material that has a defective content on the average of 0.04. By inspecting all items, the inspector is able to remove all defects. The inspector can inspect 50 units per hour. The hourly rate including fringe benefits for this position is $9. If the inspection position is eliminated, defects will go into product assembly and will have to be replaced later at a cost of $10 each when they are detected in final product testing.
a. Should this inspection position be eliminated?
b. What is the cost to inspect each unit?
c. Is there benefit (or loss) from the current inspection process? How much?

2. A metal fabricator produces connecting rods with an outer diameter that has a 1 ± .01 inch specification. A machine operator takes several sample measurements over time and determines the sample mean outer diameter to be 1.002 inches with a standard deviation of .003 inch. a. Calculate the process capability index for this example. b. What does this figure tell you about the process?

3. Output from a process contains 0.02 defective units. Defective units that go undetected into final assemblies cost $25 each to replace. An inspection process, which would detect and remove all defectives, can be established to test these units. However, the inspector, who can test 20 units per hour, is paid $8 per hour, including fringe benefits. Should an inspection station be established to test all units? a. What is the cost to inspect each unit? b. What is the benefit (or loss) from the inspection process?

4. There is a 3 percent error rate at a specific point in a production process. If an inspector is placed at this point, all the errors can be detected and eliminated. However, the inspector is paid $8 per hour and can inspect units in the process at the rate of 30 per hour.

If no inspector is used and defects are allowed to pass this point, there is a cost of $10 per unit to correct the defect later on.

Should an inspector be hired?

5. Design specifications require that a key dimension on a product measure 100 ± 10 units. A process being considered for producing this product has a standard deviation of four units.
a. What can you say (quantitatively) regarding the process capability?
b. Suppose the process average shifts to 92. Calculate the new process capability.
c. What can you say about the process after the shift? Approximately what percentage of the items produced will be defective?

6. C-Spec, Inc., is attempting to determine whether an existing machine is capable of milling an engine part that has a key specification of 4 ± .003 inches. After a trial run on this machine, C-Spec has determined that the machine has a sample mean of 4.001 inches with a standard deviation of .002 inch.
a. Calculate the Cpk for this machine.
b. Should C-Spec use this machine to produce this part? Why?

LO13–2
7. Ten samples of 15 parts each were taken from an ongoing process to establish a p-chart for control. The samples and the number of defectives in each are shown in the following table:

SAMPLE   n    NUMBER OF DEFECTIVE ITEMS IN THE SAMPLE
1        15   3
2        15   1
3        15   0
4        15   0
5        15   0
6        15   2
7        15   0
8        15   3
9        15   1
10       15   0

a. Develop a p-chart for 95 percent confidence (1.96 standard deviation). b. Based on the plotted data points, what comments can you make?

8. A shirt manufacturer buys cloth by the 100-yard roll from a supplier. For setting up a control chart to manage the irregularities (e.g., loose threads and tears), the following data were collected from a sample provided by the supplier.

SAMPLE           1   2   3   4   5   6   7   8   9   10
IRREGULARITIES   3   5   2   6   5   4   6   3   4   5

a. Using these data, set up a c-chart with z = 2.
b. Suppose the next five rolls from the supplier had three, two, five, three, and seven irregularities. Is the supplier process under control?

9. Resistors for electronic circuits are manufactured on a high-speed automated machine. The machine is set up to produce a large run of resistors of 1,000 ohms each. To set up the machine and to create a control chart to be used throughout the run, 15 samples were taken with four resistors in each sample. The complete list of samples and their measured values are as follows:

SAMPLE NUMBER   READINGS (IN OHMS)
1               1010    991    985    986
2                995    996   1009    994
3                990   1003   1015   1008
4               1015   1020   1009    998
5               1013   1019   1005    993
6                994   1001    994   1005
7                989    992    982   1020
8               1001    986    996    996
9               1006    989   1005   1007
10               992   1007   1006    979
11               996   1006    997    989
12              1019    996    991   1011
13               981    991    989   1003
14               999    993    988    984
15              1013   1002   1005    992

Develop an X̄-chart and an R-chart and plot the values. From the charts, what comments can you make about the process? (Use three-sigma control limits as in Exhibit 13.7.)

10. You are the newly appointed assistant administrator at a local hospital, and your first project is to investigate the quality of the patient meals put out by the food-service department. You conducted a 10-day survey by submitting a simple questionnaire to the 400 patients with each meal, asking that they simply check off that the meal was either satisfactory or unsatisfactory. For simplicity in this problem, assume that the response was 1,000 returned questionnaires from the 1,200 meals each day. The results are as follows:

DATE          NUMBER OF UNSATISFACTORY MEALS   SAMPLE SIZE
December 1     74                               1,000
December 2     42                               1,000
December 3     64                               1,000
December 4     80                               1,000
December 5     40                               1,000
December 6     50                               1,000
December 7     65                               1,000
December 8     70                               1,000
December 9     40                               1,000
December 10    75                               1,000
Total         600                              10,000

a. Construct a p-chart based on the questionnaire results, using a confidence interval of 95.5 percent, which is two standard deviations.

b. What comments can you make about the results of the survey?

11. The state and local police departments are trying to analyze crime rates so they can shift their patrols from decreasing-rate areas to areas where rates are increasing. The city and county have been geographically segmented into areas containing 5,000 residences. The police recognize that not all crimes and offenses are reported: People do not want to become involved, consider the offenses too small to report, are too embarrassed to make a police report, or do not take the time, among other reasons. Every month, because of this, the police are contacting by phone a random sample of 1,000 of the 5,000 residences for data on crime. (Respondents are guaranteed anonymity.) Here are the data collected for the past 12 months for one area:

MONTH       CRIME INCIDENCE   SAMPLE SIZE   CRIME RATE
January     7                 1,000         0.007
February    9                 1,000         0.009
March       7                 1,000         0.007
April       7                 1,000         0.007
May         7                 1,000         0.007
June        9                 1,000         0.009
July        7                 1,000         0.007
August      10                1,000         0.010
September   8                 1,000         0.008
October     11                1,000         0.011
November    10                1,000         0.010
December    8                 1,000         0.008

Construct a p-chart for 95 percent confidence (1.96) and plot each of the months. If the next three months show crime incidences in this area as

January = 10 (out of 1,000 sampled)

February = 12 (out of 1,000 sampled)

March = 11 (out of 1,000 sampled)

what comments can you make regarding the crime rate?

12. Some citizens complained to city council members that there should be equal protection under the law against the occurrence of crimes. The citizens argued that this equal protection should be interpreted as indicating that high-crime areas should have more police protection than low-crime areas. Therefore, police patrols and other methods for preventing crime (such as street lighting or cleaning up abandoned areas and buildings) should be used proportionately to crime occurrence.

In a fashion similar to problem 11, the city has been broken down into 20 geographic areas, each containing 5,000 residences. The 1,000 sampled from each area showed the following incidence of crime during the past month:

AREA    NUMBER OF CRIMES   SAMPLE SIZE   CRIME RATE
1       14                 1,000         0.014
2       3                  1,000         0.003
3       19                 1,000         0.019
4       18                 1,000         0.018
5       14                 1,000         0.014
6       28                 1,000         0.028
7       10                 1,000         0.010
8       18                 1,000         0.018
9       12                 1,000         0.012
10      3                  1,000         0.003
11      20                 1,000         0.020
12      15                 1,000         0.015
13      12                 1,000         0.012
14      14                 1,000         0.014
15      10                 1,000         0.010
16      30                 1,000         0.030
17      4                  1,000         0.004
18      20                 1,000         0.020
19      6                  1,000         0.006
20      30                 1,000         0.030
Total   300

Suggest a reallocation of crime protection effort, if indicated, based on a p-chart analysis. To be reasonably certain in your recommendation, select a 95 percent confidence level (that is, Z = 1.96).

13. The following table contains the measurements of the key length dimension from a fuel injector. These samples of size five were taken at one-hour intervals.

                 OBSERVATIONS
SAMPLE NUMBER    1      2      3      4      5
1                .486   .499   .493   .511   .481
2                .499   .506   .516   .494   .529
3                .496   .500   .515   .488   .521
4                .495   .506   .483   .487   .489
5                .472   .502   .526   .469   .481
6                .473   .495   .507   .493   .506
7                .495   .512   .490   .471   .504
8                .525   .501   .498   .474   .485
9                .497   .501   .517   .506   .516
10               .495   .505   .516   .511   .497
11               .495   .482   .468   .492   .492
12               .483   .459   .526   .506   .522
13               .521   .512   .493   .525   .510
14               .487   .521   .507   .501   .500
15               .493   .516   .499   .511   .513
16               .473   .506   .479   .480   .523
17               .477   .485   .513   .484   .496
18               .515   .493   .493   .485   .475
19               .511   .536   .486   .497   .491
20               .509   .490   .470   .504   .512

Construct a three-sigma X̄-chart and R-chart (use Exhibit 13.7) for the length of the fuel injector. What can you say about this process?

LO13–3
14. In the past, Alpha Corporation has not performed incoming quality control inspections but has taken the word of its vendors. However, Alpha has been having some unsatisfactory experience recently with the quality of purchased items and wants to set up sampling plans for the receiving department to use.

For a particular component, X, Alpha has a lot tolerance percentage defective of 10 percent. Zenon Corporation, from which Alpha purchases this component, has an acceptable quality level in its production facility of 3 percent for component X. Alpha has a consumer's risk of 10 percent and Zenon has a producer's risk of 5 percent.
a. When a shipment of product X is received from Zenon Corporation, what sample size should the receiving department test?
b. What is the allowable number of defects in order to accept the shipment?

15. Large-scale integrated (LSI) circuit chips are made in one department of an electronics firm. These chips are incorporated into analog devices that are then encased in epoxy. The yield is not particularly good for LSI manufacture, so the AQL specified by that department is 0.15 while the LTPD acceptable by the assembly department is 0.40. a. Develop a sampling plan. b. Explain what the sampling plan means; that is, how would you tell someone to do the test?


Case: Hot Shot Plastics Company

Plastic keychains are being produced in a company named Hot Shot Plastics. The plastic material is first molded and then trimmed to the required shape. The curetimes (which is the time for the plastic to cool) during the molding process affect the edge quality of the keychains produced. The aim is to achieve statistical control of the curetimes using X̄- and R-charts.

Curetime data of 25 samples, each of size four, have been taken when the process is assumed to be in control. These are shown below. (Note: The spreadsheet “Hot Shot Plastics.xls” has these data.)


SAMPLE NO.   OBSERVATIONS                              MEAN       RANGE
1            27.34667  27.50085  29.94412  28.21249   28.25103   2.59745
2            27.79695  26.15006  31.21295  31.33272   29.12317   5.18266
3            33.53255  29.32971  29.70460  31.05300   30.90497   4.20284
4            37.98409  32.26942  31.91741  29.44279   32.90343   8.54130
5            33.82722  30.32543  28.38117  33.70124   31.55877   5.44605
6            29.68356  29.56677  27.23077  34.00417   30.12132   6.77340
7            32.62640  26.32030  32.07892  36.17198   31.79940   9.85168
8            30.29575  30.52868  24.43315  26.85241   28.02750   6.09553
9            28.43856  30.48251  32.43083  30.76162   30.52838   3.99227
10           28.27790  33.94916  30.47406  28.87447   30.39390   5.67126
11           26.91885  27.66133  31.46936  29.66928   28.92971   4.55051
12           28.46547  28.29937  28.99441  31.14511   29.22609   2.84574
13           32.42677  26.10410  29.47718  37.20079   31.30221   11.09669
14           28.84273  30.51801  32.23614  30.47104   30.51698   3.39341
15           30.75136  32.99922  28.08452  26.19981   29.50873   6.79941
16           31.25754  24.29473  35.46477  28.41126   29.85708   11.17004
17           31.24921  28.57954  35.00865  31.23591   31.51833   6.42911
18           31.41554  35.80049  33.60909  27.82131   32.16161   7.97918
19           32.20230  32.02005  32.71018  29.37620   31.57718   3.33398
20           26.91603  29.77775  33.92696  33.78366   31.10110   7.01093
21           35.05322  32.93284  31.51641  27.73615   31.80966   7.31707
22           32.12483  29.32853  30.99709  31.39641   30.96172   2.79630
23           30.09172  32.43938  27.84725  30.70726   30.27140   4.59213
24           30.04835  27.23709  22.01801  28.69624   26.99992   8.03034
25           29.30273  30.83735  30.82735  31.90733   30.71869   2.60460

Means                                                 30.40289   5.932155

Questions
1. Prepare X̄- and R-charts using these data with the method described in the chapter.
2. Analyze the charts and comment on whether the process appears to be in control and stable.
3. Twelve additional samples of curetime data from the molding process were collected from an actual production run. The data from these new samples are shown below. Update your control charts and compare the results with the previous data. The X̄- and R-charts are drawn with the new data using the same control limits established before. Comment on what the new charts show.

SAMPLE NO.   OBSERVATIONS                              MEAN       RANGE
1            31.65830  29.78330  31.87910  33.91250   31.80830   4.12920
2            34.46430  25.18480  37.76689  39.21143   34.15686   14.02663
3            41.34268  39.54590  29.55710  32.57350   35.75480   11.78558
4            29.47310  25.37840  25.04380  24.00350   25.97470   5.46960
5            25.46710  34.85160  30.19150  31.62220   30.53310   9.38450
6            46.25184  34.71356  41.41277  44.63319   41.75284   11.53828
7            35.44750  38.83289  33.08860  31.63490   34.75097   7.19799
8            34.55143  33.86330  35.18869  42.31515   36.47964   8.45185
9            43.43549  37.36371  38.85718  39.25132   39.72693   6.07178
10           37.05298  42.47056  35.90282  38.21905   38.41135   6.56774
11           38.57292  39.06772  32.22090  33.20200   35.76589   6.84682
12           27.03050  33.63970  26.63060  42.79176   32.52314   16.16116

Case: Quality Management—Toyota

Quality Control Analytics at Toyota

As part of the process for improving the quality of their cars, Toyota engineers have identified a potential improvement to the process that makes a washer that is used in the accelerator assembly. The tolerances on the thickness of the washer are fairly large since the fit can be loose, but if it does happen to get too large, it can cause the accelerator to bind and create a potential problem for the driver. (Note: This part of the case has been fabricated for teaching purposes, and none of these data were obtained from Toyota.)

Let’s assume that, as a first step to improving the process, a sample of 40 washers coming from the machine that produces the washers was taken and the thickness measured in millimeters. The following table has the measurements from the sample:

1.9  2.0  1.9  1.8  2.2  1.7  2.0  1.9  1.7  1.8
1.8  2.2  2.1  2.2  1.9  1.8  2.1  1.6  1.8  1.6
2.1  2.4  2.2  2.1  2.1  2.0  1.8  1.7  1.9  1.9
2.1  2.0  2.4  1.7  2.2  2.0  1.6  2.0  2.1  2.2

Questions
1. If the specification is such that no washer should be greater than 2.4 millimeters, assuming that the thicknesses are distributed normally, what fraction of the output is expected to be greater than this thickness?
2. If there are an upper and lower specification, where the upper thickness limit is 2.4 and the lower thickness limit is 1.4, what fraction of the output is expected to be out of tolerance?
3. What is the Cpk for the process?
4. What would be the Cpk for the process if it were centered between the specification limits (assume the process standard deviation is the same)?
5. What percentage of output would be expected to be out of tolerance if the process were centered?
6. Set up X̄- and range control charts for the current process. Assume the operators will take samples of 10 washers at a time.
7. Plot the data on your control charts. Does the current process appear to be in control?
8. If the process could be improved so that the standard deviation were only about .10 millimeter, what would be the best that could be expected with the process relative to fraction defective?

Practice Exam
1. A Six Sigma process that is running at the center of its control limits would expect this defect rate.
2. Variation that can be clearly identified and possibly managed.
3. Variation inherent in the process itself.
4. If a process has a capability index of 1 and is running normally (centered on the mean), what percentage of the units would one expect to be defective?
5. An alternative to viewing an item as simply good or bad due to it falling in or out of the tolerance range.
6. Quality characteristics that are classified as either conforming or not conforming to specification.
7. A quality characteristic that is actually measured, such as the weight of an item.
8. A quality chart suitable for when an item is either good or bad.
9. A quality chart suitable for when a number of blemishes are expected on each unit, such as a spool of yarn.

10. Useful for checking quality when we periodically purchase large quantities of an item and it would be very costly to check each unit individually.

11. A chart that depicts the manufacturer’s and consumer’s risks associated with a sampling plan.

Answers
1. Two parts per billion units
2. Assignable variation
3. Common variation
4. Design limits are at ±3σ or 2.7 defects per thousand
5. Taguchi loss function
6. Attributes
7. Variable
8. p-chart
9. c-chart
10. Acceptance sampling
11. Operating characteristic curve


Selected Bibliography

Evans, James R., and William M. Lindsay. The Management and Control of Quality, 8th ed. Cincinnati: South-Western College Publications, 2010.

Rath & Strong. Rath & Strong’s Six Sigma Pocket Guide. Rath & Strong, Inc., 2003.

Ryan, Thomas P. Statistical Methods for Quality Improvement. New York: Wiley Series in Probability and Statistics, 2011.

Small, B. B. (with committee). Statistical Quality Control Handbook. Western Electric Co., Inc., 1956.

Footnotes

1. E. L. Grant and R. S. Leavenworth, Statistical Quality Control (New York: McGraw-Hill, 1996). Copyright © 1996 McGraw-Hill Companies, Inc. Used with permission.
2. There is some controversy surrounding AQLs. This is based on the argument that specifying some acceptable percentage of defectives is inconsistent with the philosophical goal of zero defects. In practice, even in the best QC companies, there is an acceptable quality level. The difference is that it may be stated in parts per million rather than in parts per hundred. This is the case in Motorola's Six Sigma quality standard, which holds that no more than 3.4 defects per million parts are acceptable.

3. See, for example, H. F. Dodge and H. G. Romig, Sampling Inspection Tables—Single and Double Sampling (New York: John Wiley & Sons, 1959); and Military Standard Sampling Procedures and Tables for Inspection by Attributes (MIL-STD-105D) (Washington, DC: U.S. Government Printing Office, 1983).