Solution Manual For Design and Analysis of Experiments, 10th Edition

Chapter 2 Supplemental Text Material

S2.1. Models for the Data and the t-Test

The model presented in the text, equation (2.23), is more properly called a means model. Since the mean is a location parameter, this type of model is also sometimes called a location model. There are other ways to write the model for a t-test. One possibility is

$$y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad i = 1, 2; \; j = 1, 2, \ldots, n_i$$

where $\mu$ is a parameter that is common to all observed responses (an overall mean) and $\tau_i$ is a parameter that is unique to the $i$th factor level. Sometimes we call $\tau_i$ the $i$th treatment effect. This model is usually called the effects model. Since the means model is

$$y_{ij} = \mu_i + \varepsilon_{ij}, \qquad i = 1, 2; \; j = 1, 2, \ldots, n_i$$

we see that the $i$th treatment or factor level mean is $\mu_i = \mu + \tau_i$; that is, the mean response at factor level $i$ is equal to an overall mean plus the effect of the $i$th factor. We will use both types of models to represent data from designed experiments. Most of the time we will work with effects models, because this is the traditional way to present much of this material. However, there are situations where the means model is useful, and even more natural.

S2.2. Estimating the Model Parameters

Because models arise naturally in examining data from designed experiments, we frequently need to estimate the model parameters. We often use the method of least squares for parameter estimation. This procedure chooses values for the model parameters that minimize the sum of the squares of the errors $\varepsilon_{ij}$. We will illustrate this procedure for the means model. For simplicity, assume that the sample sizes for the two factor levels are equal; that is, $n_1 = n_2 = n$. The least squares function that must be minimized is

$$L = \sum_{i=1}^{2}\sum_{j=1}^{n} \varepsilon_{ij}^2 = \sum_{i=1}^{2}\sum_{j=1}^{n} (y_{ij} - \mu_i)^2$$

Now

$$\frac{\partial L}{\partial \mu_1} = -2\sum_{j=1}^{n}(y_{1j} - \mu_1) \quad \text{and} \quad \frac{\partial L}{\partial \mu_2} = -2\sum_{j=1}^{n}(y_{2j} - \mu_2)$$

and equating these partial derivatives to zero yields the least squares normal equations

$$n\hat{\mu}_1 = \sum_{j=1}^{n} y_{1j}, \qquad n\hat{\mu}_2 = \sum_{j=1}^{n} y_{2j}$$

The solution to these equations gives the least squares estimators of the factor level means: $\hat{\mu}_1 = \bar{y}_1$ and $\hat{\mu}_2 = \bar{y}_2$; that is, the sample averages at each factor level are the estimators of the factor level means. This result should be intuitive, as we learn early on in basic statistics courses that the sample average usually provides a reasonable estimate of the population mean. However, as we have just seen, this result can be derived easily from a simple location model using least squares. It also turns out that if we assume the model errors are normally and independently distributed, the sample averages are also the maximum likelihood estimators of the factor level means. That is, if the observations are normally distributed, least squares and maximum likelihood produce exactly the same estimators of the factor level means. Maximum likelihood is a more general method of parameter estimation that usually produces parameter estimates with excellent statistical properties.
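As a quick numerical check, the following sketch (not part of the original supplement) minimizes the least squares function $L$ for the means model directly and confirms that the minimizing values are the sample averages. The data are the bond strength observations from Table 2.1, which reappear later in this supplement; the use of scipy.optimize here is purely illustrative:

import numpy as np
from scipy.optimize import minimize

# Bond strength data from Table 2.1 (y1 = modified mortar, y2 = unmodified mortar)
y1 = np.array([16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57])
y2 = np.array([16.62, 16.75, 17.37, 17.12, 16.98, 16.87, 17.34, 17.02, 17.08, 17.27])

def L(mu):
    # Least squares criterion for the means model: sum_ij (y_ij - mu_i)^2
    return np.sum((y1 - mu[0])**2) + np.sum((y2 - mu[1])**2)

result = minimize(L, x0=[0.0, 0.0])
print(result.x)                     # approximately [16.764, 17.042]
print(y1.mean(), y2.mean())         # the sample averages -- the same values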
We can also apply the method of least squares to the effects model. Assuming equal sample sizes, the least squares function is

$$L = \sum_{i=1}^{2}\sum_{j=1}^{n} \varepsilon_{ij}^2 = \sum_{i=1}^{2}\sum_{j=1}^{n} (y_{ij} - \mu - \tau_i)^2$$

and the partial derivatives of $L$ with respect to the parameters are

$$\frac{\partial L}{\partial \mu} = -2\sum_{i=1}^{2}\sum_{j=1}^{n}(y_{ij} - \mu - \tau_i), \quad \frac{\partial L}{\partial \tau_1} = -2\sum_{j=1}^{n}(y_{1j} - \mu - \tau_1), \quad \text{and} \quad \frac{\partial L}{\partial \tau_2} = -2\sum_{j=1}^{n}(y_{2j} - \mu - \tau_2)$$

Equating these partial derivatives to zero results in the following least squares normal equations:

$$2n\hat{\mu} + n\hat{\tau}_1 + n\hat{\tau}_2 = \sum_{i=1}^{2}\sum_{j=1}^{n} y_{ij}$$
$$n\hat{\mu} + n\hat{\tau}_1 = \sum_{j=1}^{n} y_{1j}$$
$$n\hat{\mu} + n\hat{\tau}_2 = \sum_{j=1}^{n} y_{2j}$$

Notice that if we add the last two of these normal equations we obtain the first one. That is, the normal equations are not linearly independent, and so they do not have a unique solution. This has occurred because the effects model is overparameterized. This situation occurs frequently; that is, the effects model for an experiment will always be an overparameterized model. One way to deal with this problem is to add another linearly independent equation to the normal equations. The most common choice is the equation $\hat{\tau}_1 + \hat{\tau}_2 = 0$. This is, in a sense, an intuitive choice, as it essentially defines the factor effects as deviations from the overall mean $\mu$. If we impose this constraint, the solution to the normal equations is

$$\hat{\mu} = \bar{y}, \qquad \hat{\tau}_i = \bar{y}_i - \bar{y}, \quad i = 1, 2$$

That is, the overall mean is estimated by the average of all $2n$ sample observations, while each individual factor effect is estimated by the difference between the sample average for that factor level and the average of all observations.

This is not the only possible choice of a linearly independent "constraint" for solving the normal equations. Another possibility is to simply set the overall mean equal to a constant, for example $\hat{\mu} = 0$. This results in the solution

$$\hat{\mu} = 0, \qquad \hat{\tau}_i = \bar{y}_i, \quad i = 1, 2$$

Yet another possibility is $\hat{\tau}_2 = 0$, producing the solution

$$\hat{\mu} = \bar{y}_2, \qquad \hat{\tau}_1 = \bar{y}_1 - \bar{y}_2, \qquad \hat{\tau}_2 = 0$$

There are an infinite number of possible constraints that could be used to solve the normal equations. An obvious question is "which solution should we use?" It turns out that it really doesn't matter. For each of the three solutions above (indeed, for any solution to the normal equations) we have

$$\hat{\mu}_i = \hat{\mu} + \hat{\tau}_i = \bar{y}_i, \quad i = 1, 2$$

That is, the least squares estimator of the mean of the $i$th factor level will always be the sample average of the observations at that factor level. So even though we cannot obtain unique estimates of the parameters in the effects model, we can obtain unique estimates of the functions of these parameters that we are interested in. We say that the mean of the $i$th factor level is estimable. Any function of the model parameters that can be uniquely estimated regardless of the constraint selected to solve the normal equations is called an estimable function. This is discussed in more detail in Chapter 3.
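To see this invariance concretely, the following sketch (again, not part of the original supplement) solves the effects-model normal equations for the bond strength data under each of the three constraints just described and verifies that $\hat{\mu} + \hat{\tau}_i$ is the same in every case:

import numpy as np

y1 = np.array([16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57])
y2 = np.array([16.62, 16.75, 17.37, 17.12, 16.98, 16.87, 17.34, 17.02, 17.08, 17.27])
n = len(y1)

# Normal equations A [mu, tau1, tau2]' = b; the rows are linearly dependent
A = np.array([[2 * n, n, n],
              [n,     n, 0],
              [n,     0, n]], dtype=float)
b = np.array([y1.sum() + y2.sum(), y1.sum(), y2.sum()])
print(np.linalg.matrix_rank(A))     # 2, so there is no unique solution

constraints = {"tau1 + tau2 = 0": [0.0, 1.0, 1.0],
               "mu = 0":          [1.0, 0.0, 0.0],
               "tau2 = 0":        [0.0, 0.0, 1.0]}
for name, c in constraints.items():
    # Append the constraint row (c . theta = 0) and solve the augmented system
    theta, *_ = np.linalg.lstsq(np.vstack([A, c]), np.append(b, 0.0), rcond=None)
    mu, t1, t2 = theta
    print(f"{name:16s} mu+tau1 = {mu + t1:.4f}, mu+tau2 = {mu + t2:.4f}")

Whichever constraint is imposed, the estimated factor level means come out as 16.764 and 17.042, the two sample averages.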
S2.3. A Regression Model Approach to the t-Test

The two-sample t-test can also be presented from the viewpoint of a simple linear regression model. This is a very instructive way to think about the t-test, as it fits in nicely with the general notion of a factorial experiment with factors at two levels, such as the golf experiment described in Chapter 1. This type of experiment is very important in practice and is discussed extensively in subsequent chapters.

In the t-test scenario, we have a factor $x$ with two levels, which we can arbitrarily call "low" and "high". We will use $x = -1$ to denote the low level of this factor and $x = +1$ to denote the high level. The figure below is a scatter plot (from Minitab) of the portland cement mortar tension bond strength data in Table 2.1 of Chapter 2.

[Figure 2-3.1. Scatter plot of bond strength versus factor level; bond strength (16.50 to 17.50) on the vertical axis and factor level (-1 to +1) on the horizontal axis.]

We will fit a simple linear regression model to these data, say

$$y_{ij} = \beta_0 + \beta_1 x_{ij} + \varepsilon_{ij}$$

where $\beta_0$ and $\beta_1$ are the intercept and slope, respectively, of the regression line, and the regressor or predictor variable is $x_{1j} = -1$ and $x_{2j} = +1$. The method of least squares can be used to estimate the slope and intercept in this model. Assuming that we have equal sample sizes $n$ for each factor level, the least squares normal equations are

$$2n\hat{\beta}_0 = \sum_{i=1}^{2}\sum_{j=1}^{n} y_{ij}$$
$$2n\hat{\beta}_1 = \sum_{j=1}^{n} y_{2j} - \sum_{j=1}^{n} y_{1j}$$

The solution to these equations is

$$\hat{\beta}_0 = \bar{y}, \qquad \hat{\beta}_1 = \tfrac{1}{2}(\bar{y}_2 - \bar{y}_1)$$

Note that the least squares estimator of the intercept is the average of all the observations from both samples, while the estimator of the slope is one-half of the difference between the sample averages at the "high" and "low" levels of the factor $x$. Below is the output from the linear regression procedure in Minitab for the tension bond strength data.

Regression Analysis: Bond Strength versus Factor level

The regression equation is
Bond Strength = 16.9 + 0.139 Factor level

Predictor       Coef      SE Coef    T        P
Constant        16.9030   0.0636     265.93   0.000
Factor level    0.13900   0.06356    2.19     0.042

S = 0.284253   R-Sq = 21.0%   R-Sq(adj) = 16.6%

Analysis of Variance
Source            DF   SS        MS        F      P
Regression         1   0.38642   0.38642   4.78   0.042
Residual Error    18   1.45440   0.08080
Total             19   1.84082

Notice that the estimate of the slope (given in the column labeled "Coef" and the row labeled "Factor level" above) is $0.139 = \tfrac{1}{2}(\bar{y}_2 - \bar{y}_1) = \tfrac{1}{2}(17.0420 - 16.7640)$ and the estimate of the intercept is 16.9030. Furthermore, notice that the t-statistic associated with the slope is 2.19, exactly the same value (apart from sign) that we gave in the Minitab two-sample t-test output in Table 2.2 of the text. Now in simple linear regression, the t-test on the slope is actually testing the hypotheses

$$H_0: \beta_1 = 0 \qquad H_1: \beta_1 \neq 0$$

and this is equivalent to testing $H_0: \mu_1 = \mu_2$.

It is easy to show that the t-test statistic used for testing that the slope equals zero in simple linear regression is identical to the usual two-sample t-test. Recall that to test the above hypotheses in simple linear regression the t-statistic is

$$t_0 = \frac{\hat{\beta}_1}{\sqrt{\hat{\sigma}^2 / S_{xx}}}$$

where $S_{xx} = \sum_{i=1}^{2}\sum_{j=1}^{n}(x_{ij} - \bar{x})^2$ is the "corrected" sum of squares of the $x$'s. Now in our specific problem $\bar{x} = 0$, $x_{1j} = -1$, and $x_{2j} = +1$, so $S_{xx} = 2n$. Therefore, since we have already observed that the estimate of $\sigma$ is just $S_p$,

$$t_0 = \frac{\hat{\beta}_1}{\sqrt{\hat{\sigma}^2 / S_{xx}}} = \frac{\tfrac{1}{2}(\bar{y}_2 - \bar{y}_1)}{S_p / \sqrt{2n}} = \frac{\bar{y}_2 - \bar{y}_1}{S_p\sqrt{2/n}}$$

This is the usual two-sample t-test statistic for the case of equal sample sizes.
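The same equivalence can be reproduced with any regression routine. The sketch below (not from the original supplement) uses scipy.stats to fit the regression with the factor coded as $x = \pm 1$ and to run the pooled two-sample t-test, recovering the slope 0.139 and the t-statistic 2.19 reported above:

import numpy as np
from scipy import stats

y1 = np.array([16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57])
y2 = np.array([16.62, 16.75, 17.37, 17.12, 16.98, 16.87, 17.34, 17.02, 17.08, 17.27])

# Code the factor as x = -1 for the first sample and x = +1 for the second
x = np.concatenate([-np.ones_like(y1), np.ones_like(y2)])
y = np.concatenate([y1, y2])

reg = stats.linregress(x, y)
print(reg.intercept, reg.slope)          # 16.903 and 0.139
print(reg.slope / reg.stderr)            # t-statistic on the slope: 2.19

t, p = stats.ttest_ind(y2, y1)           # pooled (equal-variance) two-sample t-test
print(t, p)                              # 2.19 and 0.042 -- the same test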
S2.4. Constructing Normal Probability Plots

While we usually generate normal probability plots using a computer software package, occasionally we have to construct them by hand. Fortunately, this is relatively easy to do, since specialized normal probability plotting paper is widely available. This is just graph paper with the vertical (or probability) scale arranged so that if we plot the cumulative normal probabilities $(j - 0.5)/n$ on that scale versus the rank-ordered observations $y_{(j)}$, a graph equivalent to the computer-generated normal probability plot will result. The table below shows the calculations for the unmodified portland cement mortar bond strength data.

j     y_(j)     (j - 0.5)/10     z_(j)
1     16.62     0.05             -1.64
2     16.75     0.15             -1.04
3     16.87     0.25             -0.67
4     16.98     0.35             -0.39
5     17.02     0.45             -0.13
6     17.08     0.55              0.13
7     17.12     0.65              0.39
8     17.27     0.75              0.67
9     17.34     0.85              1.04
10    17.37     0.95              1.64

Now if we plot the cumulative probabilities from the next-to-last column of this table versus the rank-ordered observations from the second column on normal probability paper, we will produce a graph that is identical to the results for the unmodified mortar formulation shown in Figure 2.11 in the text.

A normal probability plot can also be constructed on ordinary graph paper by plotting the standardized normal z-scores $z_{(j)}$ against the ranked observations, where the standardized normal z-scores are obtained from

$$P(Z \leq z_j) = \Phi(z_j) = \frac{j - 0.5}{n}$$

where $\Phi(\cdot)$ denotes the standard normal cumulative distribution function. For example, if $(j - 0.5)/n = 0.05$, then $\Phi(z_j) = 0.05$ implies that $z_j = -1.64$. The last column of the above table displays the values of the normal z-scores. Plotting these values against the ranked observations on ordinary graph paper will produce a normal probability plot equivalent to the unmodified mortar results in Figure 2.11. As noted in the text, many statistics computer packages present the normal probability plot this way.

S2.5. More About Checking Assumptions in the t-Test

We noted in the text that a normal probability plot of the observations is an excellent way to check the normality assumption in the t-test. Instead of plotting the observations, an alternative is to plot the residuals from the statistical model. Recall that the means model is

$$y_{ij} = \mu_i + \varepsilon_{ij}, \qquad i = 1, 2; \; j = 1, 2, \ldots, n_i$$

and that the estimates of the parameters (the factor level means) in this model are the sample averages. Therefore, we could say that the fitted model is

$$\hat{y}_{ij} = \bar{y}_i, \qquad i = 1, 2 \text{ and } j = 1, 2, \ldots, n_i$$

That is, an estimate of the $ij$th observation is just the average of the observations at the $i$th factor level. The difference between the observed value of the response and the predicted (or fitted) value is called a residual, say $e_{ij} = y_{ij} - \bar{y}_i$, $i = 1, 2$. The table below gives the values of the residuals for the portland cement mortar tension bond strength data.

Observation j    y_1j      e_1j = y_1j - 16.76    y_2j      e_2j = y_2j - 17.04
1                16.85      0.09                  16.62     -0.42
2                16.40     -0.36                  16.75     -0.29
3                17.21      0.45                  17.37      0.33
4                16.35     -0.41                  17.12      0.08
5                16.52     -0.24                  16.98     -0.06
6                17.04      0.28                  16.87     -0.17
7                16.96      0.20                  17.34      0.30
8                17.15      0.39                  17.02     -0.02
9                16.59     -0.17                  17.08      0.04
10               16.57     -0.19                  17.27      0.23

The figure below is a normal probability plot of these residuals from Minitab.

[Figure: Normal probability plot of the residuals (response is Bond Strength); normal percent on the vertical axis, residual on the horizontal axis.]
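The plotting coordinates in both tables above can be computed directly. The following sketch (not part of the original supplement) inverts the standard normal cumulative distribution with scipy.stats.norm.ppf to obtain the z-scores for the unmodified mortar data, and then computes the residuals for the same sample:

import numpy as np
from scipy.stats import norm

y2 = np.array([16.62, 16.75, 17.37, 17.12, 16.98, 16.87, 17.34, 17.02, 17.08, 17.27])
n = len(y2)

y_sorted = np.sort(y2)                    # rank-ordered observations y_(j)
p = (np.arange(1, n + 1) - 0.5) / n       # cumulative probabilities (j - 0.5)/n
z = norm.ppf(p)                           # z-scores: -1.64, -1.04, ..., 1.64
for yj, pj, zj in zip(y_sorted, p, z):
    print(f"{yj:6.2f}  {pj:4.2f}  {zj:6.2f}")

# Residuals for the unmodified mortar sample: e_2j = y_2j - ybar_2
residuals = y2 - y2.mean()
print(np.round(residuals, 2))             # -0.42, -0.29, 0.33, ... as in the table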
As noted in Section S2.3 above, we can also compute the t-test statistic using a simple linear regression model approach. Most regression software packages will compute a table or listing of the residuals from the model as well. The residuals from the Minitab regression model fit obtained previously are as follows:

Obs   Factor level   Bond Strength   Fit       SE Fit    Residual   St Resid
  1      -1.00          16.8500      16.7640   0.0899     0.0860      0.32
  2      -1.00          16.4000      16.7640   0.0899    -0.3640     -1.35
  3      -1.00          17.2100      16.7640   0.0899     0.4460      1.65
  4      -1.00          16.3500      16.7640   0.0899    -0.4140     -1.54
  5      -1.00          16.5200      16.7640   0.0899    -0.2440     -0.90
  6      -1.00          17.0400      16.7640   0.0899     0.2760      1.02
  7      -1.00          16.9600      16.7640   0.0899     0.1960      0.73
  8      -1.00          17.1500      16.7640   0.0899     0.3860      1.43
  9      -1.00          16.5900      16.7640   0.0899    -0.1740     -0.65
 10      -1.00          16.5700      16.7640   0.0899    -0.1940     -0.72
 11       1.00          16.6200      17.0420   0.0899    -0.4220     -1.56
 12       1.00          16.7500      17.0420   0.0899    -0.2920     -1.08
 13       1.00          17.3700      17.0420   0.0899     0.3280      1.22
 14       1.00          17.1200      17.0420   0.0899     0.0780      0.29
 15       1.00          16.9800      17.0420   0.0899    -0.0620     -0.23
 16       1.00          16.8700      17.0420   0.0899    -0.1720     -0.64
 17       1.00          17.3400      17.0420   0.0899     0.2980      1.11
 18       1.00          17.0200      17.0420   0.0899    -0.0220     -0.08
 19       1.00          17.0800      17.0420   0.0899     0.0380      0.14
 20       1.00          17.2700      17.0420   0.0899     0.2280      0.85

The column labeled "Fit" contains the averages of the two samples, computed to four decimal places. The residuals in the sixth column of this table are the same (apart from rounding) as those we computed manually above.

S2.6. Some More Information about the Paired t-Test

The paired t-test examines the differences between two variables and tests whether the mean of those differences differs from zero. In the text we show that the mean of the differences, $\mu_d$, is identical to the difference of the means of two independent samples, $\mu_1 - \mu_2$. However, the variance of the differences is not the same as it would be if there were two independent samples. Let $\bar{d}$ be the sample average of the differences. Then

$$V(\bar{d}) = V(\bar{y}_1 - \bar{y}_2) = V(\bar{y}_1) + V(\bar{y}_2) - 2\,\mathrm{Cov}(\bar{y}_1, \bar{y}_2) = \frac{2\sigma^2(1 - \rho)}{n}$$

assuming that both populations have the same variance $\sigma^2$ and that $\rho$ is the correlation between the two random variables $y_1$ and $y_2$. The quantity $S_d^2/n$ estimates the variance of the average difference $\bar{d}$. In many paired experiments a strong positive correlation is expected between $y_1$ and $y_2$ because both factor levels have been applied to the same experimental unit. When there is positive correlation within the pairs, the denominator of the paired t-test will be smaller than the denominator of the two-sample or independent t-test. If the two-sample test is applied incorrectly to paired samples, the procedure will generally understate the significance of the data. Note also that, while for convenience we have assumed that both populations have the same variance, this assumption is really unnecessary; the paired t-test is valid even when the variances of the two populations are different.
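A small simulation (not part of the original supplement; the data below are synthetic and purely illustrative) makes the point about the denominators concrete. Pairs are generated with a shared experimental-unit effect so that the within-pair correlation $\rho$ is positive; the paired variance estimate $S_d^2/n$ then sits well below the two-sample estimate $2S_p^2/n$, and the paired t-test is correspondingly more sensitive:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sigma, rho = 10, 1.0, 0.8

# A shared unit effect induces correlation rho between the paired observations
unit = rng.normal(0.0, sigma * np.sqrt(rho), n)
y1 = 10.0 + unit + rng.normal(0.0, sigma * np.sqrt(1 - rho), n)
y2 = 10.5 + unit + rng.normal(0.0, sigma * np.sqrt(1 - rho), n)

d = y1 - y2
var_paired = d.var(ddof=1) / n                                 # S_d^2 / n
var_indep = 2 * (y1.var(ddof=1) + y2.var(ddof=1)) / (2 * n)    # 2 S_p^2 / n
print(var_paired, var_indep)              # the paired estimate is much smaller here

print(stats.ttest_rel(y1, y2))            # paired t-test (the correct analysis)
print(stats.ttest_ind(y1, y2))            # two-sample t-test (ignores the pairing)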
