Solution Manual for Linear Algebra, 6th Edition

Preview Extract
2.1 – Matrix Operations

Notes: The definition here of a matrix product AB gives the proper view of AB for nearly all matrix calculations. (The dual fact about the rows of A and the rows of AB is seldom needed, mainly because vectors here are usually written as columns.) I assign Exercise 13 and most of Exercises 25–30 to reinforce the definition of AB. Exercises 31 and 32 are used in the proof of the Invertible Matrix Theorem in Section 2.3. Exercises 31–33 are mentioned in a footnote in Section 2.2. A class discussion of the solutions of Exercises 31–33 can provide a transition to Section 2.2. Or, these exercises could be assigned after starting Section 2.2. Exercises 35 and 36 are optional, but they are mentioned in Example 4 of Section 2.4. Outer products also appear in the spectral decomposition of a symmetric matrix, in Section 7.1. Exercises 37–41 provide good training for mathematics majors. When I talk with my colleagues in Engineering, the first thing they tell me is that they wish students in their classes could multiply matrices. Exercises 49–52 provide simple examples of where multiplication is used in high-tech applications.

(Matrices below are written row by row, with semicolons separating rows.)

1. -2A = (-2)[2 0 -1; 4 -3 2] = [-4 0 2; -8 6 -4]. Next, use B - 2A = B + (-2A):

B - 2A = [7 -5 1; 1 -4 -3] + [-4 0 2; -8 6 -4] = [3 -5 3; -7 2 -7]

The product AC is not defined because the number of columns of A does not match the number of rows of C.

CD = [1 2; -2 1][3 5; -1 4] = [1·3 + 2(-1)  1·5 + 2·4; -2·3 + 1(-1)  -2·5 + 1·4] = [1 13; -7 -6]

For mental computation, the row-column rule is probably easier to use than the definition.
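The arithmetic in Exercise 1 is easy to spot-check by machine. The helper functions below are an illustrative sketch (not part of the manual), using the matrices A, B, C, D given in the exercise:

```python
def mat_add(X, Y):
    # entrywise sum of two matrices of the same size
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_scale(c, X):
    # multiply every entry of X by the scalar c
    return [[c * a for a in row] for row in X]

def mat_mul(X, Y):
    # row-column rule: entry (i, j) is row i of X dotted with column j of Y
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 0, -1], [4, -3, 2]]
B = [[7, -5, 1], [1, -4, -3]]
C = [[1, 2], [-2, 1]]
D = [[3, 5], [-1, 4]]

print(mat_add(B, mat_scale(-2, A)))  # B - 2A -> [[3, -5, 3], [-7, 2, -7]]
print(mat_mul(C, D))                 # CD    -> [[1, 13], [-7, -6]]
```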
2. A + 2B = [2 0 -1; 4 -3 2] + 2[7 -5 1; 1 -4 -3] = [2+14 0-10 -1+2; 4+2 -3-8 2-6] = [16 -10 1; 6 -11 -4]

The expression 3C - E is not defined because 3C has 2 columns and -E has only 1 column.

CB = [1 2; -2 1][7 -5 1; 1 -4 -3] = [1·7 + 2·1  1(-5) + 2(-4)  1·1 + 2(-3); -2·7 + 1·1  -2(-5) + 1(-4)  -2·1 + 1(-3)] = [9 -13 -5; -13 6 -5]

The product EB is not defined because the number of columns of E does not match the number of rows of B.

3. 3I2 - A = [3 0; 0 3] - [4 -1; 5 -2] = [3-4  0-(-1); 0-5  3-(-2)] = [-1 1; -5 5]

(3I2)A = 3(I2 A) = 3[4 -1; 5 -2] = [12 -3; 15 -6], or

(3I2)A = [3 0; 0 3][4 -1; 5 -2] = [3·4 + 0  3(-1) + 0; 0 + 3·5  0 + 3(-2)] = [12 -3; 15 -6]

4. A - 5I3 = [9 -1 3; -8 7 -3; -4 1 8] - [5 0 0; 0 5 0; 0 0 5] = [4 -1 3; -8 2 -3; -4 1 3]

(5I3)A = 5(I3 A) = 5A = 5[9 -1 3; -8 7 -3; -4 1 8] = [45 -5 15; -40 35 -15; -20 5 40], or

(5I3)A = [5 0 0; 0 5 0; 0 0 5][9 -1 3; -8 7 -3; -4 1 8] = [5·9+0+0  5(-1)+0+0  5·3+0+0; 0+5(-8)+0  0+5·7+0  0+5(-3)+0; 0+0+5(-4)  0+0+5·1  0+0+5·8] = [45 -5 15; -40 35 -15; -20 5 40]

Copyright © 2021 Pearson Education, Inc.
5. a. Ab1 = [-1 2; 5 4; 2 -3][3; -2] = [-1·3 + 2(-2); 5·3 + 4(-2); 2·3 - 3(-2)] = [-7; 7; 12],

Ab2 = [-1 2; 5 4; 2 -3][-4; 1] = [-1(-4) + 2·1; 5(-4) + 4·1; 2(-4) - 3·1] = [6; -16; -11]

AB = [Ab1 Ab2] = [-7 6; 7 -16; 12 -11]

b. By the row-column rule:

[-1 2; 5 4; 2 -3][3 -4; -2 1] = [-1·3 + 2(-2)  -1(-4) + 2·1; 5·3 + 4(-2)  5(-4) + 4·1; 2·3 - 3(-2)  2(-4) - 3·1] = [-7 6; 7 -16; 12 -11]

6. a. Ab1 = [4 -2; -3 0; 3 5][1; 4] = [4·1 - 2·4; -3·1 + 0·4; 3·1 + 5·4] = [-4; -3; 23],

Ab2 = [4 -2; -3 0; 3 5][3; -1] = [4·3 - 2(-1); -3·3 + 0(-1); 3·3 + 5(-1)] = [14; -9; 4]

AB = [Ab1 Ab2] = [-4 14; -3 -9; 23 4]

b. By the row-column rule:

[4 -2; -3 0; 3 5][1 3; 4 -1] = [4·1 - 2·4  4·3 - 2(-1); -3·1 + 0·4  -3·3 + 0(-1); 3·1 + 5·4  3·3 + 5(-1)] = [-4 14; -3 -9; 23 4]

7.
Since A has 3 columns, B must match with 3 rows. Otherwise, AB is undefined. Since AB has 7 columns, so does B. Thus, B is 3×7.

8. The number of rows of B matches the number of rows of BC, so B has 3 rows.

9. AB = [2 5; -3 1][4 -5; 3 k] = [23  -10+5k; -9  15+k], while BA = [4 -5; 3 k][2 5; -3 1] = [23  15; 6-3k  15+k]. Then AB = BA if and only if -10 + 5k = 15 and -9 = 6 - 3k, which happens if and only if k = 5.

10. AB = [2 -3; -4 6][8 4; 5 5] = [1 -7; -2 14], and AC = [2 -3; -4 6][5 -2; 3 1] = [1 -7; -2 14], so AB = AC even though B and C differ.

11. AD = [1 1 1; 1 2 3; 1 4 5][2 0 0; 0 3 0; 0 0 5] = [2 3 5; 2 6 15; 2 12 25]

DA = [2 0 0; 0 3 0; 0 0 5][1 1 1; 1 2 3; 1 4 5] = [2 2 2; 3 6 9; 5 20 25]

Right-multiplication (that is, multiplication on the right) by the diagonal matrix D multiplies each column of A by the corresponding diagonal entry of D. Left-multiplication by D multiplies each row of A by the corresponding diagonal entry of D. To make AB = BA, one can take B to be a multiple of I3. For instance, if B = 4I3, then AB and BA are both the same as 4A.

12. Consider B = [b1 b2]. To make AB = 0, one needs Ab1 = 0 and Ab2 = 0. By inspection of A, a suitable b1 is [2; 1], or any multiple of [2; 1]. Example: B = [2 6; 1 3].

13. Use the definition of AB written in reverse order: [Ab1 ⋯ Abp] = A[b1 ⋯ bp]. Thus [Qr1 ⋯ Qrp] = QR, when R = [r1 ⋯ rp].

14. By definition, UQ = U[q1 ⋯ q4] = [Uq1 ⋯ Uq4].
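As a sanity check on Exercise 9, a short Python search over small integer values of k (not part of the manual; the matrices are those of the exercise) confirms that k = 5 is the only one making AB = BA:

```python
def mat_mul(X, Y):
    # row-column rule for the matrix product XY
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 5], [-3, 1]]

def B(k):
    # the matrix B from Exercise 9, with its unknown entry k
    return [[4, -5], [3, k]]

# search a small integer range for values of k that make AB = BA
hits = [k for k in range(-10, 11) if mat_mul(A, B(k)) == mat_mul(B(k), A)]
print(hits)  # -> [5]
```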
From Example 6 of Section 1.8, the vector Uq1 lists the total costs (material, labor, and overhead) corresponding to the amounts of products B and C specified in the vector q1. That is, the first column of UQ lists the total costs for materials, labor, and overhead used to manufacture products B and C during the first quarter of the year. Columns 2, 3, and 4 of UQ list the total amounts spent to manufacture B and C during the 2nd, 3rd, and 4th quarters, respectively.

15. False. See the definition of AB.

16. False. AB must be a 3×3 matrix, but the formula for AB implies that it is 3×1. The plus signs should be just spaces (between columns). This is a common mistake.

17. False. The roles of A and B should be reversed in the second half of the statement. See the box after Example 3.

18. True. See the box after Example 6.

19. True. See Theorem 2(b), read right to left.

20. True. See Theorem 3(b), read right to left.

21. False. The left-to-right order of B and C cannot be changed, in general.

22. False. See Theorem 3(d).

23. False. The phrase "in the same order" should be "in the reverse order." See the box after Theorem 3.

24. True. This general statement follows from Theorem 3(b).

25. Since [-1 2 -1; 6 -9 3] = AB = [Ab1 Ab2 Ab3], the first column of B satisfies the equation Ax = [-1; 6]. Row reduction: [A Ab1] = [1 -2 -1; -2 5 6] ~ [1 0 7; 0 1 4]. So b1 = [7; 4]. Similarly, [A Ab2] = [1 -2 2; -2 5 -9] ~ [1 0 -8; 0 1 -5] and b2 = [-8; -5].

Note: An alternative solution of Exercise 25 is to row reduce [A Ab1 Ab2] with one sequence of row operations. This observation can prepare the way for the inversion algorithm in Section 2.2.

26. The first two columns of AB are Ab1 and Ab2.
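In Exercise 25, each computed column of B can be confirmed by multiplying it back by A and comparing with the corresponding column of AB; a quick sketch (not part of the manual):

```python
def mat_vec(M, x):
    # matrix-vector product: each entry is a row of M dotted with x
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

A = [[1, -2], [-2, 5]]
b1, b2 = [7, 4], [-8, -5]

print(mat_vec(A, b1))  # -> [-1, 6], the first column of AB
print(mat_vec(A, b2))  # -> [2, -9], the second column of AB
```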
They are equal since b1 and b2 are equal.

27. (A solution is in the text.) Write B = [b1 b2 b3]. By definition, the third column of AB is Ab3. By hypothesis, b3 = b1 + b2. So Ab3 = A(b1 + b2) = Ab1 + Ab2, by a property of matrix-vector multiplication. Thus, the third column of AB is the sum of the first two columns of AB.

28. The second column of AB is also all zeros because Ab2 = A0 = 0.

29. Let bp be the last column of B. By hypothesis, the last column of AB is zero. Thus, Abp = 0. However, bp is not the zero vector, because B has no column of zeros. Thus, the equation Abp = 0 is a linear dependence relation among the columns of A, and so the columns of A are linearly dependent.

Note: The text answer for Exercise 29 is, "The columns of A are linearly dependent. Why?" The Study Guide supplies the argument above in case a student needs help.

30. If the columns of B are linearly dependent, then there exists a nonzero vector x such that Bx = 0. From this, A(Bx) = A0 and (AB)x = 0 (by associativity). Since x is nonzero, the columns of AB must be linearly dependent.

31. If x satisfies Ax = 0, then CAx = C0 = 0 and so In x = 0 and x = 0. This shows that the equation Ax = 0 has no free variables. So every variable is a basic variable and every column of A is a pivot column. (A variation of this argument could be made using linear independence and Exercise 36 in Section 1.7.) Since each pivot is in a different row, A must have at least as many rows as columns.

32. Take any b in R^m. By hypothesis, ADb = Im b = b. Rewrite this equation as A(Db) = b. Thus, the vector x = Db satisfies Ax = b. This proves that the equation Ax = b has a solution for each b in R^m. By Theorem 4 in Section 1.4, A has a pivot position in each row. Since each pivot is in a different column, A must have at least as many columns as rows.

33.
By Exercise 31, the equation CA = In implies that (number of rows in A) ≥ (number of columns), that is, m ≥ n. By Exercise 32, the equation AD = Im implies that (number of rows in A) ≤ (number of columns), that is, m ≤ n. Thus m = n.

2.4 – Partitioned Matrices

Note: Exercise 25 is good for mathematics and computer science students. The solution of Exercise 25 in the Study Guide shows students how to use the principle of induction. The Study Guide also has an appendix on "The Principle of Induction," at the end of Section 2.4. The text presents more applications of induction in Section 3.2 and in the Supplementary Exercises for Chapter 3.

26. Let An = [1 0 0 ⋯ 0; 1 1 0 ⋯ 0; 1 1 1 ⋯ 0; ⋮; 1 1 1 ⋯ 1] and Bn = [1 0 0 ⋯ 0; -1 1 0 ⋯ 0; 0 -1 1 ⋯ 0; ⋮; 0 ⋯ 0 -1 1].

By direct computation A2B2 = I2. Assume that for n = k, the matrix AkBk is Ik, and write

Ak+1 = [1 0^T; v Ak] and Bk+1 = [1 0^T; w Bk]

where v and w are in R^k, v^T = [1 1 ⋯ 1], and w^T = [-1 0 ⋯ 0]. Then

Ak+1 Bk+1 = [1 0^T; v Ak][1 0^T; w Bk] = [1 + 0^T w   0^T + 0^T Bk; v + Ak w   v 0^T + Ak Bk] = [1 0^T; 0 Ik] = Ik+1

The (2,1)-entry is 0 because v equals the first column of Ak, and Ak w is -1 times the first column of Ak. By the principle of induction, AnBn = In for all n ≥ 2. Since An and Bn are square, the IMT shows that these matrices are invertible, and Bn = An^(-1).

Note: An induction proof can also be given using partitions with the form shown below. The details are slightly more complicated.
Ak+1 = [Ak 0; v^T 1] and Bk+1 = [Bk 0; w^T 1]

Ak+1 Bk+1 = [Ak 0; v^T 1][Bk 0; w^T 1] = [Ak Bk + 0 w^T   0 + 0; v^T Bk + w^T   0 + 1] = [Ik 0; 0^T 1] = Ik+1

The (2,1)-entry is 0^T because v^T times a column of Bk equals the sum of the entries in the column, and all of such sums are zero except the last, which is 1. So v^T Bk is the negative of w^T. By the principle of induction, AnBn = In for all n ≥ 2. Since An and Bn are square, the IMT shows that these matrices are invertible, and Bn = An^(-1).

27. First, visualize a partition of A as a 2×2 block-diagonal matrix, as below, and then visualize the (2,2)-block itself as a block-diagonal matrix. That is,

A = [1 2 0 0 0; 3 5 0 0 0; 0 0 2 0 0; 0 0 0 7 8; 0 0 0 5 6] = [A11 0; 0 A22], where A22 = [2 0 0; 0 7 8; 0 5 6] = [2 0; 0 B]

Observe that B = [7 8; 5 6] is invertible and B^(-1) = [3 -4; -2.5 3.5]. By Exercise 15, the block diagonal matrix A22 is invertible, and

A22^(-1) = [.5 0 0; 0 3 -4; 0 -2.5 3.5]

Next, observe that A11 = [1 2; 3 5] is also invertible, with inverse [-5 2; 3 -1]. By Exercise 15, A itself is invertible, and its inverse is block diagonal:

A^(-1) = [A11^(-1) 0; 0 A22^(-1)] = [-5 2 0 0 0; 3 -1 0 0 0; 0 0 .5 0 0; 0 0 0 3 -4; 0 0 0 -2.5 3.5]
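The block-diagonal inverse in Exercise 27 is easy to verify numerically. The sketch below (the helper `block_diag` is ours, not the manual's) assembles A and its claimed inverse from the blocks and multiplies them:

```python
def mat_mul(X, Y):
    # row-column rule for the matrix product XY
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def block_diag(*blocks):
    # assemble a block-diagonal matrix from square blocks (hypothetical helper)
    n = sum(len(b) for b in blocks)
    M = [[0.0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                M[off + i][off + j] = v
        off += len(b)
    return M

# A and its claimed inverse, built block by block as in Exercise 27
A    = block_diag([[1, 2], [3, 5]], [[2]], [[7, 8], [5, 6]])
Ainv = block_diag([[-5, 2], [3, -1]], [[0.5]], [[3, -4], [-2.5, 3.5]])
I5 = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
print(mat_mul(A, Ainv) == I5)  # -> True
```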
28. This exercise and the next, which involve large matrices, are more appropriate for MATLAB, Maple, and Mathematica than for graphing calculators.

a. Display the submatrix of A obtained from rows 15 to 20 and columns 5 to 10.

MATLAB: A(15:20, 5:10)
Maple: submatrix(A, 15..20, 5..10)
Mathematica: Take[ A, {15,20}, {5,10} ]

b. Insert a 5×10 matrix B into rows 10 to 14 and columns 20 to 29 of matrix A:

MATLAB: A(10:14, 20:29) = B ; (the semicolon suppresses output display)
Maple: copyinto(B, A, 10, 20): (the colon suppresses output display)
Mathematica: For [ i=10, i<=14, i++, For [ j=20, j …

2.9 – Dimension and Rank

35. a. For j = 1, …, q, the vector aj is in W. Since the columns of B span W, the vector aj is in the column space of B. That is, aj = Bcj for some vector cj of weights. Note that cj is in R^p because B has p columns.

b. Let C = [c1 ⋯ cq]. Then C is a p×q matrix because each of the q columns is in R^p. By hypothesis, q is larger than p, so C has more columns than rows. By a theorem, the columns of C are linearly dependent and there exists a nonzero vector u in R^q such that Cu = 0.

c. From part (a) and the definition of matrix multiplication, A = [a1 ⋯ aq] = [Bc1 ⋯ Bcq] = BC. From part (b), Au = (BC)u = B(Cu) = B0 = 0. Since u is nonzero, the columns of A are linearly dependent.

36. If 𝒜 contained more vectors than ℬ, then 𝒜 would be linearly dependent, by Exercise 35, because ℬ spans W. Repeat the argument with ℬ and 𝒜 interchanged to conclude that ℬ cannot contain more vectors than 𝒜.

37. Apply the matrix command rref to the matrix [v1 v2 x]:

[11 14 19; -5 -8 -13; 10 13 18; 7 10 15] ~ [1 0 -1.667; 0 1 2.667; 0 0 0; 0 0 0]

The equation c1v1 + c2v2 = x is consistent, so x is in the subspace H.
The decimal approximations suggest c1 = -5/3 and c2 = 8/3, and it can be checked that these values are precise. Thus the B-coordinate vector of x is (-5/3, 8/3).

38. Apply the matrix command rref to the matrix [v1 v2 v3 x]:

[-6 8 -9 4; 4 -3 5 7; -9 7 -8 -8; 4 -3 3 3] ~ [1 0 0 3; 0 1 0 5; 0 0 1 2; 0 0 0 0]

The first three columns of [v1 v2 v3 x] are pivot columns, so v1, v2, and v3 are linearly independent. Thus v1, v2, and v3 form a basis B for the subspace H which they span. View [v1 v2 v3 x] as an augmented matrix for c1v1 + c2v2 + c3v3 = x. The reduced echelon form shows that x is in H and [x]B = [3; 5; 2].

Notes: The Study Guide for Section 2.9 contains a complete list of the statements in the Invertible Matrix Theorem that have been given so far. The format is the same as that used in Section 2.3, with three columns: statements that are logically equivalent for any m×n matrix and are related to existence concepts, those that are equivalent only for any n×n matrix, and those that are equivalent for any n×p matrix and are related to uniqueness concepts. Four statements are included that are not in the text's official list of statements, to give more symmetry to the three columns. The Study Guide section also contains directions for making a review sheet for "dimension" and "rank."

Chapter 2 – Supplementary Exercises

1. True. If A and B are m×n matrices, then B^T has as many rows as A has columns, so AB^T is defined. Also, A^T B is defined because A^T has m columns and B has m rows.

2. False. B must have 2 columns. A has as many columns as B has rows.

3. True. The ith row of A has the form (0, …, di, …, 0). So the ith row of AB is (0, …, di, …, 0)B, which is di times the ith row of B.

4. False. Take the zero matrix for B.
Or, construct a matrix B such that the equation Bx = 0 has nontrivial solutions, and construct C and D so that C ≠ D and the columns of C - D satisfy the equation Bx = 0. Then B(C - D) = 0 and BC = BD.

5. False. Counterexample: A = [1 0; 0 0] and C = [0 0; 0 1].

6. False. (A + B)(A - B) = A^2 - AB + BA - B^2. This equals A^2 - B^2 if and only if A commutes with B.

7. True. An n×n replacement matrix has n + 1 nonzero entries. The n×n scale and interchange matrices have n nonzero entries.

8. True. The transpose of an elementary matrix is an elementary matrix of the same type.

9. True. An n×n elementary matrix is obtained by a row operation on In.

10. False. Elementary matrices are invertible, so a product of such matrices is invertible. But not every square matrix is invertible.

11. True. If A is 3×3 with three pivot positions, then A is row equivalent to I3.

12. False. A must be square in order to conclude from the equation AB = I that A is invertible.

13. False. AB is invertible, but (AB)^(-1) = B^(-1)A^(-1), and this product is not always equal to A^(-1)B^(-1).

14. True. Given AB = BA, left-multiply by A^(-1) to get B = A^(-1)BA, and then right-multiply by A^(-1) to obtain BA^(-1) = A^(-1)B.

15. False. The correct equation is (rA)^(-1) = r^(-1)A^(-1), because (rA)(r^(-1)A^(-1)) = (r r^(-1))(A A^(-1)) = 1·I = I.

16. C = (C^(-1))^(-1) = (1/(-2))[7 -5; -6 4] = [-7/2 5/2; 3 -2]

17. A = [0 0 0; 1 0 0; 0 1 0],

A^2 = A·A = [0 0 0; 1 0 0; 0 1 0][0 0 0; 1 0 0; 0 1 0] = [0 0 0; 0 0 0; 1 0 0]

A^3 = A^2·A = [0 0 0; 0 0 0; 1 0 0][0 0 0; 1 0 0; 0 1 0] = [0 0 0; 0 0 0; 0 0 0]

Next, (I - A)(I + A + A^2) = I + A + A^2 - A(I + A + A^2) = I + A + A^2 - A - A^2 - A^3 = I - A^3.
Since A^3 = 0, (I - A)(I + A + A^2) = I.

18. From Exercise 17, the inverse of I - A is probably I + A + A^2 + ⋯ + A^(n-1). To verify this, compute

(I - A)(I + A + ⋯ + A^(n-1)) = I + A + ⋯ + A^(n-1) - A(I + A + ⋯ + A^(n-1)) = I - A·A^(n-1) = I - A^n

If A^n = 0, then the matrix B = I + A + A^2 + ⋯ + A^(n-1) satisfies (I - A)B = I. Since I - A and B are square, they are invertible by the Invertible Matrix Theorem, and B is the inverse of I - A.

19. A^2 = 2A - I. Multiply by A: A^3 = 2A^2 - A. Substitute A^2 = 2A - I: A^3 = 2(2A - I) - A = 3A - 2I. Multiply by A again: A^4 = A(3A - 2I) = 3A^2 - 2A. Substitute the identity A^2 = 2A - I again. Finally, A^4 = 3(2A - I) - 2A = 4A - 3I.

20. Let A = [1 0; 0 -1] and B = [0 1; 1 0]. By direct computation, A^2 = I, B^2 = I, and AB = [0 1; -1 0] = -BA.

21. (Partial answer in Study Guide) Since A^(-1)B is the solution of AX = B, row reduction of [A B] to [I X] will produce X = A^(-1)B. See Exercise 22 in Section 2.2.

[A B] = [1 3 8 -3 5; 2 4 11 1 5; 1 2 5 3 4] ~ [1 3 8 -3 5; 0 -2 -5 7 -5; 0 -1 -3 6 -1] ~ [1 3 8 -3 5; 0 1 3 -6 1; 0 0 1 -5 -3] ~ [1 3 0 37 29; 0 1 0 9 10; 0 0 1 -5 -3] ~ [1 0 0 10 -1; 0 1 0 9 10; 0 0 1 -5 -3]

Thus, A^(-1)B = [10 -1; 9 10; -5 -3].
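The row reduction in Exercise 21 can be replicated by a small Gauss-Jordan routine. This is only an illustrative sketch (the function `solve_aug` is ours, not the manual's), applied to the data A = [1 3 8; 2 4 11; 1 2 5] and B = [-3 5; 1 5; 3 4] of Exercise 21:

```python
def solve_aug(A, B):
    # Gauss-Jordan elimination on the augmented matrix [A | B]; returns X = A^{-1}B
    n = len(A)
    M = [list(map(float, rowA + rowB)) for rowA, rowB in zip(A, B)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)  # find a usable pivot row
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [x / piv for x in M[i]]                     # scale the pivot row
        for r in range(n):
            if r != i and M[r][i] != 0:                    # eliminate the other rows
                f = M[r][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [row[n:] for row in M]

A = [[1, 3, 8], [2, 4, 11], [1, 2, 5]]
B = [[-3, 5], [1, 5], [3, 4]]
print(solve_aug(A, B))  # -> [[10.0, -1.0], [9.0, 10.0], [-5.0, -3.0]]
```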
22. By definition of matrix multiplication, the matrix A satisfies A[1 2; 3 7] = [1 3; 1 1]. Right-multiply both sides by the inverse of [1 2; 3 7]. The left side becomes A. Thus,

A = [1 3; 1 1][7 -2; -3 1] = [-2 1; 4 -1]

23. Given AB = [5 4; -2 3] and B = [7 3; 2 1], notice that (AB)B^(-1) = A. Since det B = 7 - 6 = 1,

B^(-1) = [1 -3; -2 7] and A = (AB)B^(-1) = [5 4; -2 3][1 -3; -2 7] = [-3 13; -8 27]

Note: Variants of this question make simple exam questions.

24. Since A is invertible, so is A^T, by the Invertible Matrix Theorem. Then A^T A is the product of invertible matrices and so is invertible. Thus, the formula (A^T A)^(-1)A^T makes sense. By Theorem 6 in Section 2.2,

(A^T A)^(-1)·A^T = A^(-1)(A^T)^(-1)A^T = A^(-1)I = A^(-1)

An alternative calculation: (A^T A)^(-1)A^T·A = (A^T A)^(-1)(A^T A) = I. Since A is invertible, this equation shows that its inverse is (A^T A)^(-1)A^T.

25. a. For i = 1, …, n, p(xi) = c0 + c1xi + ⋯ + cn-1 xi^(n-1) = row_i(V)·[c0; ⋯; cn-1] = row_i(V) c.

By a property of matrix multiplication, shown after Example 6 in Section 2.1, and the fact that c was chosen to satisfy Vc = y,

row_i(V) c = row_i(Vc) = row_i(y) = yi

Thus, p(xi) = yi. To summarize, the entries in Vc are the values of the polynomial p(x) at x1, …, xn.

b. Suppose x1, …, xn are distinct, and suppose Vc = 0 for some vector c. Then the entries in c are the coefficients of a polynomial whose value is zero at the distinct points x1, …, xn. However, a nonzero polynomial of degree n - 1 cannot have n zeros, so the polynomial must be identically zero. That is, the entries in c must all be zero. This shows that the columns of V are linearly independent.

c. (Solution in Study Guide) When x1, …, xn are distinct, the columns of V are linearly independent, by (b). By the Invertible Matrix Theorem, V is invertible and its columns span R^n. So, for every y = (y1, …, yn) in R^n, there is a vector c such that Vc = y.
Let p be the polynomial whose coefficients are listed in c. Then, by (a), p is an interpolating polynomial for (x1, y1), …, (xn, yn).

26. If A = LU, then col1(A) = L·col1(U). Since col1(U) has a zero in every entry except possibly the first, L·col1(U) is a linear combination of the columns of L in which all weights except possibly the first are zero. So col1(A) is a multiple of col1(L). Similarly, col2(A) = L·col2(U), which is a linear combination of the columns of L using the first two entries in col2(U) as weights, because the other entries in col2(U) are zero. Thus col2(A) is a linear combination of the first two columns of L.

27. a. P^2 = (uu^T)(uu^T) = u(u^T u)u^T = u(1)u^T = P, because u satisfies u^T u = 1.

b. P^T = (uu^T)^T = u^TT u^T = uu^T = P

c. Q^2 = (I - 2P)(I - 2P) = I - I(2P) - 2PI + 2P(2P) = I - 4P + 4P^2 = I, because of part (a).

28. Given u = [0; 0; 1], define P and Q as in Exercise 27 by

P = uu^T = [0; 0; 1][0 0 1] = [0 0 0; 0 0 0; 0 0 1], Q = I - 2P = [1 0 0; 0 1 0; 0 0 1] - 2[0 0 0; 0 0 0; 0 0 1] = [1 0 0; 0 1 0; 0 0 -1]

If x = [1; 5; 3], then Px = [0 0 0; 0 0 0; 0 0 1][1; 5; 3] = [0; 0; 3] and Qx = [1 0 0; 0 1 0; 0 0 -1][1; 5; 3] = [1; 5; -3].

29. Left-multiplication by an elementary matrix produces an elementary row operation: B ~ E1B ~ E2E1B ~ E3E2E1B = C, so B is row equivalent to C. Since row operations are reversible, C is row equivalent to B. (Alternatively, show C being changed into B by row operations using the inverses of the Ei.)
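The projection P = uu^T and reflection Q = I - 2P of Exercise 28 can be checked directly; a small Python sketch (not part of the manual):

```python
def mat_vec(M, x):
    # matrix-vector product: each entry is a row of M dotted with x
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# P = u u^T and Q = I - 2P for u = (0, 0, 1), as in Exercise 28
u = [0, 0, 1]
P = [[ui * uj for uj in u] for ui in u]
Q = [[(1 if i == j else 0) - 2 * P[i][j] for j in range(3)] for i in range(3)]

x = [1, 5, 3]
print(mat_vec(P, x))  # -> [0, 0, 3], projection onto the line through u
print(mat_vec(Q, x))  # -> [1, 5, -3], reflection through the plane x3 = 0
```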
30. Since A is not invertible, there is a nonzero vector v in R^n such that Av = 0. Place n copies of v into an n×n matrix B. Then AB = A[v ⋯ v] = [Av ⋯ Av] = 0.

31. Let A be a 6×4 matrix and B a 4×6 matrix. Since B has more columns than rows, its six columns are linearly dependent and there is a nonzero x such that Bx = 0. Thus ABx = A0 = 0. This shows that the matrix AB is not invertible, by the IMT. (Basically the same argument was used to solve Exercise 30 in Section 2.1.)

Note: (In the Study Guide) It is possible that BA is invertible. For example, let C be an invertible 4×4 matrix and construct A = [C; 0] and B = [C^(-1) 0]. Then BA = I4, which is invertible.

32. By hypothesis, A is 5×3, C is 3×5, and CA = I3. Suppose x satisfies Ax = b. Then CAx = Cb. Since CA = I, x must be Cb. This shows that Cb is the only solution of Ax = b.

33. Let A = [.4 .2 .3; .3 .6 .3; .3 .2 .4]. Then A^2 = [.31 .26 .30; .39 .48 .39; .30 .26 .31]. Instead of computing A^3 next, speed up the calculations by computing

A^4 = A^2 A^2 = [.2875 .2834 .2874; .4251 .4332 .4251; .2874 .2834 .2875], A^8 = A^4 A^4 = [.2857 .2857 .2857; .4285 .4286 .4285; .2857 .2857 .2857]

To four decimal places, as k increases,

A^k → [.2857 .2857 .2857; .4286 .4286 .4286; .2857 .2857 .2857], or, in rational format, A^k → [2/7 2/7 2/7; 3/7 3/7 3/7; 2/7 2/7 2/7]

If B = [0 .2 .3; .1 .6 .3; .9 .2 .4], then

B^2 = [.29 .18 .18; .33 .44 .33; .38 .38 .49], B^4 = B^2 B^2 = [.2119 .1998 .1998; .3663 .3784 .3663; .4218 .4218 .4339], B^8 = B^4 B^4 = [.2024 .2022 .2022; .3707 .3709 .3707; .4269 .4269 .4271]
To four decimal places, as k increases,

B^k → [.2022 .2022 .2022; .3708 .3708 .3708; .4270 .4270 .4270], or, in rational format, B^k → [18/89 18/89 18/89; 33/89 33/89 33/89; 38/89 38/89 38/89]

34. The 4×4 matrix A4 is the 4×4 matrix of ones, minus the 4×4 identity matrix. The MATLAB command is A4 = ones(4) - eye(4). For the inverse, use inv(A4).

A4 = [0 1 1 1; 1 0 1 1; 1 1 0 1; 1 1 1 0]

A4^(-1) = [-2/3 1/3 1/3 1/3; 1/3 -2/3 1/3 1/3; 1/3 1/3 -2/3 1/3; 1/3 1/3 1/3 -2/3]

A5 = [0 1 1 1 1; 1 0 1 1 1; 1 1 0 1 1; 1 1 1 0 1; 1 1 1 1 0]

A5^(-1) = [-3/4 1/4 1/4 1/4 1/4; 1/4 -3/4 1/4 1/4 1/4; 1/4 1/4 -3/4 1/4 1/4; 1/4 1/4 1/4 -3/4 1/4; 1/4 1/4 1/4 1/4 -3/4]

A6 = [0 1 1 1 1 1; 1 0 1 1 1 1; 1 1 0 1 1 1; 1 1 1 0 1 1; 1 1 1 1 0 1; 1 1 1 1 1 0]

A6^(-1) = [-4/5 1/5 1/5 1/5 1/5 1/5; 1/5 -4/5 1/5 1/5 1/5 1/5; 1/5 1/5 -4/5 1/5 1/5 1/5; 1/5 1/5 1/5 -4/5 1/5 1/5; 1/5 1/5 1/5 1/5 -4/5 1/5; 1/5 1/5 1/5 1/5 1/5 -4/5]

The construction of A6 and the appearance of its inverse suggest that the inverse is related to I6. In fact, A6^(-1) + I6 is 1/5 times the 6×6 matrix of ones. Let J denote the n×n matrix of ones. The conjecture is:

An = J - In and An^(-1) = (1/(n-1))·J - In

Proof: (Not required) Observe that J^2 = nJ and An J = (J - I)J = J^2 - J = (n - 1)J.
Now compute An((n - 1)^(-1)J - I) = (n - 1)^(-1)An J - An = J - (J - I) = I. Since An is square, An is invertible and its inverse is (n - 1)^(-1)J - I.
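The conjecture above can be spot-checked with exact rational arithmetic. The following sketch (not from the manual) verifies An·((n-1)^(-1)J - In) = In for n = 2, …, 6:

```python
from fractions import Fraction

def a_matrix(n):
    # A_n = J - I_n: ones everywhere except zeros on the diagonal
    return [[Fraction(0 if i == j else 1) for j in range(n)] for i in range(n)]

def conjectured_inverse(n):
    # the conjectured inverse (1/(n-1))J - I_n
    c = Fraction(1, n - 1)
    return [[c - (1 if i == j else 0) for j in range(n)] for i in range(n)]

def mat_mul(X, Y):
    # row-column rule for the matrix product XY
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

for n in range(2, 7):
    I = [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    assert mat_mul(a_matrix(n), conjectured_inverse(n)) == I
print("verified for n = 2, ..., 6")
```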
