Solution Manual for Linear Algebra and Its Applications, 5th Edition

2.1 SOLUTIONS

Notes: The definition here of a matrix product AB gives the proper view of AB for nearly all matrix calculations. (The dual fact about the rows of A and the rows of AB is seldom needed, mainly because vectors here are usually written as columns.) I assign Exercise 13 and most of Exercises 17–22 to reinforce the definition of AB. Exercises 23 and 24 are used in the proof of the Invertible Matrix Theorem, in Section 2.3. Exercises 23–25 are mentioned in a footnote in Section 2.2. A class discussion of the solutions of Exercises 23–25 can provide a transition to Section 2.2. Or, these exercises could be assigned after starting Section 2.2. Exercises 27 and 28 are optional, but they are mentioned in Example 4 of Section 2.4. Outer products also appear in Exercises 31–34 of Section 4.6 and in the spectral decomposition of a symmetric matrix, in Section 7.1. Exercises 29–33 provide good training for mathematics majors.

1. -2A = (-2)[2 0 -1; 4 -5 2] = [-4 0 2; -8 10 -4]. Next, use B - 2A = B + (-2A):

   B - 2A = [7 -5 1; 1 -4 -3] + [-4 0 2; -8 10 -4] = [3 -5 3; -7 6 -7]

   The product AC is not defined because the number of columns of A does not match the number of rows of C.

   CD = [1 2; -2 1][3 5; -1 4] = [1·3 + 2(-1), 1·5 + 2·4; -2·3 + 1(-1), -2·5 + 1·4] = [1 13; -7 -6]

   For mental computation, the row-column rule is probably easier to use than the definition.

2. A + 2B = [2 0 -1; 4 -5 2] + 2[7 -5 1; 1 -4 -3] = [2+14, 0-10, -1+2; 4+2, -5-8, 2-6] = [16 -10 1; 6 -13 -4]

   The expression 3C - E is not defined because 3C has 2 columns and -E has only 1 column.
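The two ways of computing a product described above, the definition (each column of AB is A times the corresponding column of B) and the row-column rule, can be cross-checked in code. This is a plain-Python sketch, not part of the text's solutions; the matrices are C and D from Exercise 1.

```python
# A cross-check of the two views of AB, using C and D from Exercise 1
# (plain Python, no external libraries).

def matvec(A, x):
    # A times a column vector x
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

def matmul_by_columns(A, B):
    # AB = [Ab1 ... Abp]: column j of AB is A times column j of B
    cols = [matvec(A, [B[k][j] for k in range(len(B))]) for j in range(len(B[0]))]
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(A))]

def matmul_row_column(A, B):
    # the row-column rule: (AB)_ij = (row i of A) . (column j of B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

C = [[1, 2], [-2, 1]]
D = [[3, 5], [-1, 4]]
print(matmul_by_columns(C, D))                             # [[1, 13], [-7, -6]]
print(matmul_by_columns(C, D) == matmul_row_column(C, D))  # True
```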
   CB = [1 2; -2 1][7 -5 1; 1 -4 -3] = [1·7 + 2·1, 1(-5) + 2(-4), 1·1 + 2(-3); -2·7 + 1·1, -2(-5) + 1(-4), -2·1 + 1(-3)] = [9 -13 -5; -13 6 -5]

   The product EB is not defined because the number of columns of E does not match the number of rows of B.

Copyright © 2016 Pearson Education, Inc.

3. 3I2 - A = [3 0; 0 3] - [4 -1; 5 -2] = [3-4, 0-(-1); 0-5, 3-(-2)] = [-1 1; -5 5]

   (3I2)A = 3(I2 A) = 3[4 -1; 5 -2] = [12 -3; 15 -6], or

   (3I2)A = [3 0; 0 3][4 -1; 5 -2] = [3·4 + 0, 3(-1) + 0; 0 + 3·5, 0 + 3(-2)] = [12 -3; 15 -6]

4. A - 5I3 = [9 -1 3; -8 7 -6; -4 1 8] - [5 0 0; 0 5 0; 0 0 5] = [4 -1 3; -8 2 -6; -4 1 3]

   (5I3)A = 5(I3 A) = 5A = 5[9 -1 3; -8 7 -6; -4 1 8] = [45 -5 15; -40 35 -30; -20 5 40], or

   (5I3)A = [5 0 0; 0 5 0; 0 0 5][9 -1 3; -8 7 -6; -4 1 8] = [5·9 + 0 + 0, 5(-1) + 0 + 0, 5·3 + 0 + 0; 0 + 5(-8) + 0, 0 + 5·7 + 0, 0 + 5(-6) + 0; 0 + 0 + 5(-4), 0 + 0 + 5·1, 0 + 0 + 5·8] = [45 -5 15; -40 35 -30; -20 5 40]

5. a. Ab1 = [-1 2; 5 4; 2 -3][3; -2] = [-1·3 + 2(-2); 5·3 + 4(-2); 2·3 - 3(-2)] = [-7; 7; 12],

      Ab2 = [-1 2; 5 4; 2 -3][-2; 1] = [-1(-2) + 2·1; 5(-2) + 4·1; 2(-2) - 3·1] = [4; -6; -7],

      AB = [Ab1 Ab2] = [-7 4; 7 -6; 12 -7]

   b. [-1 2; 5 4; 2 -3][3 -2; -2 1] = [-1·3 + 2(-2), -1(-2) + 2·1; 5·3 + 4(-2), 5(-2) + 4·1; 2·3 - 3(-2), 2(-2) - 3·1] = [-7 4; 7 -6; 12 -7]

6. a. Ab1 = [4 -2; -3 0; 3 5][1; 2] = [4·1 - 2·2; -3·1 + 0·2; 3·1 + 5·2] = [0; -3; 13],

      Ab2 = [4 -2; -3 0; 3 5][3; -1] = [4·3 - 2(-1); -3·3 + 0(-1); 3·3 + 5(-1)] = [14; -9; 4],

      AB = [Ab1 Ab2] = [0 14; -3 -9; 13 4]

   b. [4 -2; -3 0; 3 5][1 3; 2 -1] = [4·1 - 2·2, 4·3 - 2(-1); -3·1 + 0·2, -3·3 + 0(-1); 3·1 + 5·2, 3·3 + 5(-1)] = [0 14; -3 -9; 13 4]

7. Since A has 3 columns, B must have 3 rows; otherwise AB is undefined. Since AB has 7 columns, so does B. Thus, B is 3×7.

8. The number of rows of B matches the number of rows of BC, so B has 3 rows.
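Exercises 5 and 6 compute AB one column at a time. A minimal Python sketch of that column-by-column view, using the matrix A of Exercise 6 (not part of the text's solutions):

```python
# Column-by-column multiplication for Exercise 6: each column of AB is
# A times the corresponding column of B (plain-Python sketch).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[4, -2], [-3, 0], [3, 5]]
print(matvec(A, [1, 2]))    # Ab1 = [0, -3, 13]
print(matvec(A, [3, -1]))   # Ab2 = [14, -9, 4]
```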
9. AB = [2 5; -3 1][4 -5; 3 k] = [23, -10+5k; -9, 15+k], while BA = [4 -5; 3 k][2 5; -3 1] = [23, 15; 6-3k, 15+k].

   Then AB = BA if and only if -10 + 5k = 15 and -9 = 6 - 3k, which happens if and only if k = 5.

10. AB = [2 -3; -4 6][8 4; 5 5] = [1 -7; -2 14], AC = [2 -3; -4 6][5 -2; 3 1] = [1 -7; -2 14], so AB = AC even though B ≠ C.

11. AD = [1 1 1; 1 2 3; 1 4 5][2 0 0; 0 3 0; 0 0 5] = [2 3 5; 2 6 15; 2 12 25]

    DA = [2 0 0; 0 3 0; 0 0 5][1 1 1; 1 2 3; 1 4 5] = [2 2 2; 3 6 9; 5 20 25]

    Right-multiplication (that is, multiplication on the right) by the diagonal matrix D multiplies each column of A by the corresponding diagonal entry of D. Left-multiplication by D multiplies each row of A by the corresponding diagonal entry of D. To make AB = BA, one can take B to be a multiple of I3. For instance, if B = 4I3, then AB and BA are both the same as 4A.

12. Consider B = [b1 b2]. To make AB = 0, one needs Ab1 = 0 and Ab2 = 0. By inspection of A, a suitable b1 is [2; 1], or any multiple of [2; 1]. Example: B = [2 6; 1 3].

13. Use the definition of AB written in reverse order: [Ab1 · · · Abp] = A[b1 · · · bp]. Thus [Qr1 · · · Qrp] = QR, when R = [r1 · · · rp].

14. By definition, UQ = U[q1 · · · q4] = [Uq1 · · · Uq4]. From Example 6 of Section 1.8, the vector Uq1 lists the total costs (material, labor, and overhead) corresponding to the amounts of products B and C specified in the vector q1.
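The diagonal-scaling facts in Exercise 11 are easy to confirm numerically; this plain-Python sketch (not from the text) multiplies A by D on each side:

```python
# Exercise 11's rule in code: AD scales the columns of A by the diagonal
# entries of D, and DA scales the rows (plain-Python sketch).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1, 1], [1, 2, 3], [1, 4, 5]]
D = [[2, 0, 0], [0, 3, 0], [0, 0, 5]]

print(matmul(A, D))   # [[2, 3, 5], [2, 6, 15], [2, 12, 25]]: columns scaled by 2, 3, 5
print(matmul(D, A))   # [[2, 2, 2], [3, 6, 9], [5, 20, 25]]: rows scaled by 2, 3, 5
```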
    That is, the first column of UQ lists the total costs for materials, labor, and overhead used to manufacture products B and C during the first quarter of the year. Columns 2, 3, and 4 of UQ list the total amounts spent to manufacture B and C during the 2nd, 3rd, and 4th quarters, respectively.

15. a. False. See the definition of AB.

    b. False. The roles of A and B should be reversed in the second half of the statement. See the box after Example 3.

    c. True. See Theorem 2(b), read right to left.

    d. True. See Theorem 3(b), read right to left.

    e. False. The phrase "in the same order" should be "in the reverse order." See the box after Theorem 3.

16. a. False. AB must be a 3×3 matrix, but the formula for AB implies that it is 3×1. The plus signs should be just spaces (between columns). This is a common mistake.

    b. True. See the box after Example 6.

    c. False. The left-to-right order of B and C cannot be changed, in general.

    d. False. See Theorem 3(d).

    e. True. This general statement follows from Theorem 3(b).

17. Since AB = [Ab1 Ab2 Ab3] = [-1 2 -1; 6 -9 3], the first column of B satisfies the equation Ax = [-1; 6]. Row reduction: [A Ab1] ~ [1 -2 -1; -2 5 6] ~ [1 0 7; 0 1 4], so b1 = [7; 4]. Similarly, [A Ab2] ~ [1 -2 2; -2 5 -9] ~ [1 0 -8; 0 1 -5], and b2 = [-8; -5].

    Note: An alternative solution of Exercise 17 is to row reduce [A Ab1 Ab2] with one sequence of row operations. This observation can prepare the way for the inversion algorithm in Section 2.2.

18. The first two columns of AB are Ab1 and Ab2. They are equal since b1 and b2 are equal.

19. (A solution is in the text). Write B = [b1 b2 b3]. By definition, the third column of AB is Ab3. By hypothesis, b3 = b1 + b2.
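Exercise 17 recovers columns of B by solving Ax = Abj. As an illustration (not the text's row reduction, but it produces the same solutions), a 2×2 solve by Cramer's rule reproduces b1 and b2:

```python
# Recovering b1 and b2 in Exercise 17 by solving A x = (column of AB);
# a 2x2 Cramer's-rule solve keeps the sketch short.

def solve2x2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

A = [[1, -2], [-2, 5]]
print(solve2x2(A, [-1, 6]))   # b1 = [7.0, 4.0]
print(solve2x2(A, [2, -9]))   # b2 = [-8.0, -5.0]
```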
    So Ab3 = A(b1 + b2) = Ab1 + Ab2, by a property of matrix-vector multiplication. Thus, the third column of AB is the sum of the first two columns of AB.

20. The second column of AB is also all zeros because Ab2 = A0 = 0.

21. Let bp be the last column of B. By hypothesis, the last column of AB is zero. Thus, Abp = 0. However, bp is not the zero vector, because B has no column of zeros. Thus, the equation Abp = 0 is a linear dependence relation among the columns of A, and so the columns of A are linearly dependent.

    Note: The text answer for Exercise 21 is, "The columns of A are linearly dependent. Why?" The Study Guide supplies the argument above in case a student needs help.

22. If the columns of B are linearly dependent, then there exists a nonzero vector x such that Bx = 0. From this, A(Bx) = A0 and (AB)x = 0 (by associativity). Since x is nonzero, the columns of AB must be linearly dependent.

23. If x satisfies Ax = 0, then CAx = C0 = 0 and so In x = 0 and x = 0. This shows that the equation Ax = 0 has no free variables. So every variable is a basic variable and every column of A is a pivot column. (A variation of this argument could be made using linear independence and Exercise 30 in Section 1.7.) Since each pivot is in a different row, A must have at least as many rows as columns.

24. Take any b in R^m. By hypothesis, ADb = Im b = b. Rewrite this equation as A(Db) = b. Thus, the vector x = Db satisfies Ax = b. This proves that the equation Ax = b has a solution for each b in R^m. By Theorem 4 in Section 1.4, A has a pivot position in each row. Since each pivot is in a different column, A must have at least as many columns as rows.

25. By Exercise 23, the equation CA = In implies that (number of rows in A) ≥ (number of columns), that is, m ≥ n. By Exercise 24, the equation AD = Im implies that (number of rows in A) ≤ (number of columns), that is, m ≤ n. Thus m = n.
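The argument of Exercise 22 can be watched numerically: any dependence relation Bx = 0 is inherited by AB. The matrix B below is the one from Exercise 12; A is a made-up 2×2 matrix, used only for illustration.

```python
# Exercise 22 in action: a dependence relation Bx = 0 forces (AB)x = 0.
# B is the matrix from Exercise 12; A is a made-up 2x2 matrix.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

B = [[2, 6], [1, 3]]       # dependent columns: 3*(col 1) - (col 2) = 0
x = [3, -1]                # a nonzero vector with Bx = 0
A = [[1, 4], [0, 2]]       # arbitrary illustration matrix

assert matvec(B, x) == [0, 0]
print(matvec(matmul(A, B), x))   # [0, 0]: the columns of AB satisfy the same relation
```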
    Note: Exercise 23 is good for mathematics and computer science students. The solution of Exercise 23 in the Study Guide shows students how to use the principle of induction. The Study Guide also has an appendix on "The Principle of Induction," at the end of Section 2.4. The text presents more applications of induction in Section 3.2 and in the Supplementary Exercises for Chapter 3.

24. Let

    An = [1 0 0 · · · 0; 1 1 0 · · · 0; 1 1 1 · · · 0; ... ; 1 1 1 · · · 1], Bn = [1 0 0 · · · 0; -1 1 0 · · · 0; 0 -1 1 · · · 0; ... ; 0 · · · 0 -1 1]

    that is, An has 1s on and below the diagonal, and Bn has 1s on the diagonal, -1s immediately below the diagonal, and 0s elsewhere. By direct computation A2B2 = I2. Assume that for n = k, the matrix AkBk is Ik, and write

    A_{k+1} = [1 0^T; v Ak] and B_{k+1} = [1 0^T; w Bk]

    where v and w are in R^k, v^T = [1 1 · · · 1], and w^T = [-1 0 · · · 0]. Then

    A_{k+1}B_{k+1} = [1 0^T; v Ak][1 0^T; w Bk] = [1 + 0^T w, 0^T + 0^T Bk; v + Ak w, v 0^T + Ak Bk] = [1 0^T; 0 Ik] = I_{k+1}

    The (2,1)-entry is 0 because v equals the first column of Ak, and Ak w is -1 times the first column of Ak. By the principle of induction, AnBn = In for all n ≥ 2. Since An and Bn are square, the IMT shows that these matrices are invertible, and Bn = An^{-1}.

    Note: An induction proof can also be given using partitions with the form shown below. The details are slightly more complicated.
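The induction in Exercise 24 can be sanity-checked by building An and Bn directly and verifying AnBn = In for several n (a plain-Python sketch, not part of the text):

```python
# A check of Exercise 24's induction: A_n (1s on and below the diagonal)
# times B_n (1s on the diagonal, -1s just below) is I_n.

def A(n):
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def B(n):
    return [[1 if j == i else (-1 if j == i - 1 else 0) for j in range(n)]
            for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

print(all(matmul(A(n), B(n)) == identity(n) for n in range(2, 7)))   # True
```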
    A_{k+1} = [Ak 0; v^T 1] and B_{k+1} = [Bk 0; w^T 1]

    where now v^T = [1 1 · · · 1] and w^T = [0 · · · 0 -1] are the first k entries of the last rows of A_{k+1} and B_{k+1}. Then

    A_{k+1}B_{k+1} = [Ak 0; v^T 1][Bk 0; w^T 1] = [Ak Bk + 0 w^T, 0 + 0; v^T Bk + w^T, 0 + 1] = [Ik 0; 0^T 1] = I_{k+1}

    The (2,1)-entry is 0^T because v^T times a column of Bk equals the sum of the entries in the column, and all of these sums are zero except the last, which is 1. So v^T Bk is the negative of w^T. By the principle of induction, AnBn = In for all n ≥ 2. Since An and Bn are square, the IMT shows that these matrices are invertible, and Bn = An^{-1}.

25. First, visualize a partition of A as a 2×2 block-diagonal matrix, as below, and then visualize the (2,2)-block itself as a block-diagonal matrix. That is,

    A = [1 2 0 0 0; 3 5 0 0 0; 0 0 2 0 0; 0 0 0 7 8; 0 0 0 5 6] = [A11 0; 0 A22], where A22 = [2 0 0; 0 7 8; 0 5 6] = [2 0; 0 B], with B = [7 8; 5 6].

    Observe that B is invertible and B^{-1} = [3 -4; -2.5 3.5]. By Exercise 13, the block-diagonal matrix A22 is invertible, and

    A22^{-1} = [.5 0 0; 0 3 -4; 0 -2.5 3.5]

    Next, observe that A11 = [1 2; 3 5] is also invertible, with inverse [-5 2; 3 -1]. By Exercise 13, A itself is invertible, and its inverse is block diagonal:

    A^{-1} = [A11^{-1} 0; 0 A22^{-1}] = [-5 2 0 0 0; 3 -1 0 0 0; 0 0 .5 0 0; 0 0 0 3 -4; 0 0 0 -2.5 3.5]

26. [M] This exercise and the next, which involve large matrices, are more appropriate for MATLAB, Maple, and Mathematica than for graphing calculators.
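To confirm the block computation of Exercise 25, the following sketch multiplies A by the proposed block-diagonal inverse, using exact fractions for the .5, -2.5, and 3.5 entries:

```python
# Multiplying Exercise 25's A by its proposed block-diagonal inverse;
# fractions keep .5, -2.5, and 3.5 exact.

from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 0, 0, 0],
     [3, 5, 0, 0, 0],
     [0, 0, 2, 0, 0],
     [0, 0, 0, 7, 8],
     [0, 0, 0, 5, 6]]

Ainv = [[-5, 2, 0, 0, 0],
        [3, -1, 0, 0, 0],
        [0, 0, F(1, 2), 0, 0],
        [0, 0, 0, 3, -4],
        [0, 0, 0, F(-5, 2), F(7, 2)]]

I5 = [[int(i == j) for j in range(5)] for i in range(5)]
print(matmul(A, Ainv) == I5)   # True
```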
    a. Display the submatrix of A obtained from rows 15 to 20 and columns 5 to 10.

       MATLAB: A(15:20, 5:10)
       Maple: submatrix(A, 15..20, 5..10)
       Mathematica: Take[ A, {15,20}, {5,10} ]

    b. Insert a 5×10 matrix B into rows 10 to 14 and columns 20 to 29 of matrix A:

       MATLAB: A(10:14, 20:29) = B ; The semicolon suppresses output display.
       Maple: copyinto(B, A, 10, 20):
       Mathematica: For[ i=10, i<=14, i++, For[ j=20, j<=29, j++, ... ] ]

27. a. For j = 1, …, q, the vector aj is in W. Since the columns of B span W, the vector aj is in the column space of B. That is, aj = Bcj for some vector cj of weights. Note that cj is in R^p because B has p columns.

    b. Let C = [c1 · · · cq]. Then C is a p×q matrix because each of the q columns is in R^p. By hypothesis, q is larger than p, so C has more columns than rows. By a theorem, the columns of C are linearly dependent and there exists a nonzero vector u in R^q such that Cu = 0.

    c. From part (a) and the definition of matrix multiplication, A = [a1 · · · aq] = [Bc1 · · · Bcq] = BC. From part (b), Au = (BC)u = B(Cu) = B0 = 0. Since u is nonzero, the columns of A are linearly dependent.

28. If 𝒜 contained more vectors than ℬ, then 𝒜 would be linearly dependent, by Exercise 27, because ℬ spans W. Repeat the argument with ℬ and 𝒜 interchanged to conclude that ℬ cannot contain more vectors than 𝒜.

29. [M] Apply the matrix command ref or rref to the matrix [v1 v2 x]:

    [11 14 19; -5 -8 -13; 10 13 18; 7 10 15] ~ [1 0 -1.667; 0 1 2.667; 0 0 0; 0 0 0]

    The equation c1v1 + c2v2 = x is consistent, so x is in the subspace H. The decimal approximations suggest c1 = -5/3 and c2 = 8/3, and it can be checked that these values are precise. Thus, the ℬ-coordinate of x is (-5/3, 8/3).
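For readers working in Python rather than MATLAB, Maple, or Mathematica, the two submatrix operations of Exercise 26 look like this (0-based, half-open indexing versus MATLAB's 1-based inclusive ranges; the 30×30 matrix A here is a made-up stand-in):

```python
# The same submatrix operations in Python; A is a made-up 30x30 matrix.

A = [[100 * i + j for j in range(1, 31)] for i in range(1, 31)]

# a. the submatrix from rows 15 to 20 and columns 5 to 10
sub = [row[4:10] for row in A[14:20]]
print(len(sub), len(sub[0]))    # 6 6

# b. insert a 5x10 matrix B into rows 10 to 14 and columns 20 to 29
B = [[0] * 10 for _ in range(5)]
for i in range(5):
    A[9 + i][19:29] = B[i]
print(A[9][19:29])              # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```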
30. [M] Apply the matrix command ref or rref to the matrix [v1 v2 v3 x]:

    [-6 8 -9 4; 4 -3 5 7; -9 7 -8 -8; 4 -3 3 3] ~ [1 0 0 3; 0 1 0 5; 0 0 1 2; 0 0 0 0]

    The first three columns of [v1 v2 v3 x] are pivot columns, so v1, v2, and v3 are linearly independent. Thus v1, v2, and v3 form a basis ℬ for the subspace H which they span. View [v1 v2 v3 x] as an augmented matrix for c1v1 + c2v2 + c3v3 = x. The reduced echelon form shows that x is in H and [x]_ℬ = [3; 5; 2].

    Notes: The Study Guide for Section 2.9 contains a complete list of the statements in the Invertible Matrix Theorem that have been given so far. The format is the same as that used in Section 2.3, with three columns: statements that are logically equivalent for any m×n matrix and are related to existence concepts, those that are equivalent only for any n×n matrix, and those that are equivalent for any n×p matrix and are related to uniqueness concepts. Four statements are included that are not in the text's official list of statements, to give more symmetry to the three columns. The Study Guide section also contains directions for making a review sheet for "dimension" and "rank."

Chapter 2 SUPPLEMENTARY EXERCISES

1. a. True. If A and B are m×n matrices, then B^T has as many rows as A has columns, so AB^T is defined. Also, A^T B is defined because A^T has m columns and B has m rows.

   b. False. B must have 2 columns. A has as many columns as B has rows.

   c. True. The ith row of A has the form (0, …, di, …, 0). So the ith row of AB is (0, …, di, …, 0)B, which is di times the ith row of B.

   d. False. Take the zero matrix for B. Or, construct a matrix B such that the equation Bx = 0 has nontrivial solutions, and construct C and D so that C ≠ D and the columns of C - D satisfy the equation Bx = 0. Then B(C - D) = 0 and BC = BD.
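Exercise 29's coordinates can also be computed exactly instead of read off decimal rref output; this sketch (not from the text) solves the first two equations of c1v1 + c2v2 = x with Cramer's rule over the rationals and then checks the remaining rows:

```python
# Exercise 29's coordinates computed exactly over the rationals.

from fractions import Fraction as F

v1 = [11, -5, 10, 7]
v2 = [14, -8, 13, 10]
x  = [19, -13, 18, 15]

det = F(v1[0] * v2[1] - v2[0] * v1[1])
c1 = (x[0] * v2[1] - v2[0] * x[1]) / det
c2 = (v1[0] * x[1] - x[0] * v1[1]) / det

print(c1, c2)                                                    # -5/3 8/3
print(all(c1 * a + c2 * b == c for a, b, c in zip(v1, v2, x)))   # True
```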
   e. False. Counterexample: A = [0 0; 0 1] and C = [1 0; 0 0].

   f. False. (A + B)(A - B) = A^2 - AB + BA - B^2. This equals A^2 - B^2 if and only if A commutes with B.

   g. True. An n×n replacement matrix has n + 1 nonzero entries. The n×n scale and interchange matrices have n nonzero entries.

   h. True. The transpose of an elementary matrix is an elementary matrix of the same type.

   i. True. An n×n elementary matrix is obtained by a row operation on In.

   j. False. Elementary matrices are invertible, so a product of such matrices is invertible. But not every square matrix is invertible.

   k. True. If A is 3×3 with three pivot positions, then A is row equivalent to I3.

   l. False. A must be square in order to conclude from the equation AB = I that A is invertible.

   m. False. AB is invertible, but (AB)^{-1} = B^{-1}A^{-1}, and this product is not always equal to A^{-1}B^{-1}.

   n. True. Given AB = BA, left-multiply by A^{-1} to get B = A^{-1}BA, and then right-multiply by A^{-1} to obtain BA^{-1} = A^{-1}B.

   o. False. The correct equation is (rA)^{-1} = r^{-1}A^{-1}, because (rA)(r^{-1}A^{-1}) = (rr^{-1})(AA^{-1}) = 1·I = I.

   p. True. If the equation Ax = [1; 0; 0] has a unique solution, then there are no free variables in this equation, which means that A must have three pivot positions (since A is 3×3). By the Invertible Matrix Theorem, A is invertible.

2. C = (C^{-1})^{-1} = (1/(-2))[7 -5; -6 4] = [-7/2 5/2; 3 -2]
3. A = [0 0 0; 1 0 0; 0 1 0], A^2 = A·A = [0 0 0; 1 0 0; 0 1 0][0 0 0; 1 0 0; 0 1 0] = [0 0 0; 0 0 0; 1 0 0]

   A^3 = A^2·A = [0 0 0; 0 0 0; 1 0 0][0 0 0; 1 0 0; 0 1 0] = [0 0 0; 0 0 0; 0 0 0]

   Next, (I - A)(I + A + A^2) = I + A + A^2 - A(I + A + A^2) = I + A + A^2 - A - A^2 - A^3 = I - A^3. Since A^3 = 0, (I - A)(I + A + A^2) = I.

4. From Exercise 3, the inverse of I - A is probably I + A + A^2 + · · · + A^{n-1}. To verify this, compute

   (I - A)(I + A + · · · + A^{n-1}) = I + A + · · · + A^{n-1} - A(I + A + · · · + A^{n-1}) = I - AA^{n-1} = I - A^n

   If A^n = 0, then the matrix B = I + A + A^2 + · · · + A^{n-1} satisfies (I - A)B = I. Since I - A and B are square, they are invertible by the Invertible Matrix Theorem, and B is the inverse of I - A.

5. A^2 = 2A - I. Multiply by A: A^3 = 2A^2 - A. Substitute A^2 = 2A - I: A^3 = 2(2A - I) - A = 3A - 2I.

   Multiply by A again: A^4 = A(3A - 2I) = 3A^2 - 2A. Substitute the identity A^2 = 2A - I again. Finally, A^4 = 3(2A - I) - 2A = 4A - 3I.

6. Let A = [1 0; 0 -1] and B = [0 1; 1 0]. By direct computation, A^2 = I, B^2 = I, and AB = [0 1; -1 0] = -BA.
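Exercise 3's identity is quick to verify numerically: for the nilpotent A there (A^3 = 0), I + A + A^2 inverts I - A. A plain-Python check, not part of the text:

```python
# Verifying Exercise 3: for this nilpotent A (A^3 = 0), I + A + A^2 is
# the inverse of I - A.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A2 = matmul(A, A)

assert matmul(A2, A) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]        # A^3 = 0
ImA = [[i - a for i, a in zip(ri, ra)] for ri, ra in zip(I, A)]  # I - A
print(matmul(ImA, add(add(I, A), A2)) == I)                      # True
```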
7. (Partial answer in Study Guide) Since A^{-1}B is the solution of AX = B, row reduction of [A B] to [I X] will produce X = A^{-1}B. See Exercise 12 in Section 2.2. Row reducing [A B] all the way to reduced echelon form [I A^{-1}B] gives

   A^{-1}B = [10 -1; 9 10; -5 -3]

8. By definition of matrix multiplication, the matrix A satisfies A[1 2; 3 7] = [1 3; 1 1]. Right-multiply both sides by the inverse of [1 2; 3 7]. The left side becomes A. Thus,

   A = [1 3; 1 1][7 -2; -3 1] = [-2 1; 4 -1]

9. Given AB = [5 4; -2 3] and B = [7 3; 2 1], notice that (AB)B^{-1} = A. Since det B = 7 - 6 = 1,

   B^{-1} = [1 -3; -2 7] and A = (AB)B^{-1} = [5 4; -2 3][1 -3; -2 7] = [-3 13; -8 27]

   Note: Variants of this question make simple exam questions.

10. Since A is invertible, so is A^T, by the Invertible Matrix Theorem. Then A^T A is the product of invertible matrices and so is invertible. Thus, the formula (A^T A)^{-1}A^T makes sense. By Theorem 6 in Section 2.2,

    (A^T A)^{-1}·A^T = A^{-1}(A^T)^{-1}A^T = A^{-1}I = A^{-1}

    An alternative calculation: (A^T A)^{-1}A^T·A = (A^T A)^{-1}(A^T A) = I. Since A is invertible, this equation shows that its inverse is (A^T A)^{-1}A^T.

11. a. For i = 1, …, n, p(xi) = c0 + c1 xi + · · · + c_{n-1} xi^{n-1} = row_i(V)·[c0; …; c_{n-1}] = row_i(V) c.
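Exercise 9 as code: because det B = 1, the 2×2 inverse formula gives B^{-1} with integer entries, and then A = (AB)B^{-1} (a sketch, not the text's computation):

```python
# Exercise 9: det B = 1, so the 2x2 inverse formula stays in integers,
# and A = (AB) B^{-1}.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AB = [[5, 4], [-2, 3]]
B = [[7, 3], [2, 1]]

det = B[0][0] * B[1][1] - B[0][1] * B[1][0]   # 7 - 6 = 1
Binv = [[B[1][1] // det, -B[0][1] // det],
        [-B[1][0] // det, B[0][0] // det]]
print(Binv)                                    # [[1, -3], [-2, 7]]
print(matmul(AB, Binv))                        # [[-3, 13], [-8, 27]]
```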
    By a property of matrix multiplication, shown after Example 6 in Section 2.1, and the fact that c was chosen to satisfy Vc = y,

    row_i(V) c = row_i(Vc) = row_i(y) = yi

    Thus, p(xi) = yi. To summarize, the entries in Vc are the values of the polynomial p(x) at x1, …, xn.

    b. Suppose x1, …, xn are distinct, and suppose Vc = 0 for some vector c. Then the entries in c are the coefficients of a polynomial whose value is zero at the distinct points x1, …, xn. However, a nonzero polynomial of degree n - 1 cannot have n zeros, so the polynomial must be identically zero. That is, the entries in c must all be zero. This shows that the columns of V are linearly independent.

    c. (Solution in Study Guide) When x1, …, xn are distinct, the columns of V are linearly independent, by (b). By the Invertible Matrix Theorem, V is invertible and its columns span R^n. So, for every y = (y1, …, yn) in R^n, there is a vector c such that Vc = y. Let p be the polynomial whose coefficients are listed in c. Then, by (a), p is an interpolating polynomial for (x1, y1), …, (xn, yn).

12. If A = LU, then col1(A) = L·col1(U). Since col1(U) has a zero in every entry except possibly the first, L·col1(U) is a linear combination of the columns of L in which all weights except possibly the first are zero. So col1(A) is a multiple of col1(L).

    Similarly, col2(A) = L·col2(U), which is a linear combination of the columns of L using the first two entries in col2(U) as weights, because the other entries in col2(U) are zero. Thus col2(A) is a linear combination of the first two columns of L.

13. a. P^2 = (uu^T)(uu^T) = u(u^T u)u^T = u(1)u^T = P, because u satisfies u^T u = 1.

    b. P^T = (uu^T)^T = u^{TT} u^T = uu^T = P

    c. Q^2 = (I - 2P)(I - 2P) = I - I(2P) - 2PI + 2P(2P) = I - 4P + 4P^2 = I, because of part (a).
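The interpolation argument of Exercise 11 can be exercised end to end: build the Vandermonde matrix V for distinct points, solve Vc = y by Gauss-Jordan elimination over the rationals, and confirm p(xi) = yi. The data points below are made up for illustration.

```python
# Exercise 11 end to end with a hypothetical data set.

from fractions import Fraction as F

xs = [0, 1, 2]     # distinct points (hypothetical)
ys = [1, 3, 11]    # target values (hypothetical)
n = len(xs)

# augmented matrix [V | y], with V_ij = xs[i] ** j
M = [[F(xi) ** j for j in range(n)] + [F(yi)] for xi, yi in zip(xs, ys)]
for i in range(n):
    piv = next(r for r in range(i, n) if M[r][i] != 0)   # partial pivot
    M[i], M[piv] = M[piv], M[i]
    M[i] = [v / M[i][i] for v in M[i]]                   # scale pivot row
    for r in range(n):
        if r != i:                                       # eliminate column i
            M[r] = [a - M[r][i] * b for a, b in zip(M[r], M[i])]

c = [row[-1] for row in M]                   # coefficients c0, ..., c_{n-1}
p = lambda t: sum(cj * t ** j for j, cj in enumerate(c))
print(all(p(xi) == yi for xi, yi in zip(xs, ys)))   # True
```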
14. Given u = [0; 0; 1], define P and Q as in Exercise 13 by

    P = uu^T = [0; 0; 1][0 0 1] = [0 0 0; 0 0 0; 0 0 1], Q = I - 2P = [1 0 0; 0 1 0; 0 0 1] - 2[0 0 0; 0 0 0; 0 0 1] = [1 0 0; 0 1 0; 0 0 -1]

    If x = [1; 5; 3], then Px = [0 0 0; 0 0 0; 0 0 1][1; 5; 3] = [0; 0; 3] and Qx = [1 0 0; 0 1 0; 0 0 -1][1; 5; 3] = [1; 5; -3].

15. Left-multiplication by an elementary matrix produces an elementary row operation: B ~ E1B ~ E2E1B ~ E3E2E1B = C, so B is row equivalent to C. Since row operations are reversible, C is row equivalent to B. (Alternatively, show C being changed into B by row operations using the inverses of the Ei.)

16. Since A is not invertible, there is a nonzero vector v in R^n such that Av = 0. Place n copies of v into an n×n matrix B. Then AB = A[v · · · v] = [Av · · · Av] = 0.

17. Let A be a 6×4 matrix and B a 4×6 matrix. Since B has more columns than rows, its six columns are linearly dependent and there is a nonzero x such that Bx = 0. Thus ABx = A0 = 0. This shows that the matrix AB is not invertible, by the IMT. (Basically the same argument was used to solve Exercise 22 in Section 2.1.)

    Note: (In the Study Guide) It is possible that BA is invertible. For example, let C be an invertible 4×4 matrix and construct A = [C; 0] and B = [C^{-1} 0]. Then BA = I4, which is invertible.

18. By hypothesis, A is 5×3, C is 3×5, and AC = I3. Suppose x satisfies Ax = b. Then CAx = Cb.
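Exercise 14's projection and reflection are a few lines of code: with u = e3, P = uu^T picks out the third coordinate and Q = I - 2P flips its sign (a plain-Python sketch):

```python
# Exercise 14 numerically: P = u u^T and Q = I - 2P for u = e3.

u = [0, 0, 1]
P = [[ui * uj for uj in u] for ui in u]                           # uu^T
Q = [[int(i == j) - 2 * P[i][j] for j in range(3)] for i in range(3)]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

x = [1, 5, 3]
print(matvec(P, x))   # [0, 0, 3]
print(matvec(Q, x))   # [1, 5, -3]
```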
    Since CA = I, x must be Cb. This shows that Cb is the only solution of Ax = b.

19. [M] Let A = [.4 .2 .3; .3 .6 .3; .3 .2 .4]. Then A^2 = [.31 .26 .30; .39 .48 .39; .30 .26 .31]. Instead of computing A^3 next, speed up the calculations by computing

    A^4 = A^2 A^2 = [.2875 .2834 .2874; .4251 .4332 .4251; .2874 .2834 .2875], A^8 = A^4 A^4 = [.2857 .2857 .2857; .4285 .4286 .4285; .2857 .2857 .2857]

    To four decimal places, as k increases,

    A^k → [.2857 .2857 .2857; .4286 .4286 .4286; .2857 .2857 .2857], or, in rational format, A^k → [2/7 2/7 2/7; 3/7 3/7 3/7; 2/7 2/7 2/7].

    If B = [0 .2 .3; .1 .6 .3; .9 .2 .4], then

    B^2 = [.29 .18 .18; .33 .44 .33; .38 .38 .49], B^4 = [.2119 .1998 .1998; .3663 .3784 .3663; .4218 .4218 .4339], B^8 = [.2024 .2022 .2022; .3707 .3709 .3707; .4269 .4269 .4271]

    To four decimal places, as k increases,

    B^k → [.2022 .2022 .2022; .3708 .3708 .3708; .4270 .4270 .4270], or, in rational format, B^k → [18/89 18/89 18/89; 33/89 33/89 33/89; 38/89 38/89 38/89].

20. [M] The 4×4 matrix A4 is the 4×4 matrix of ones, minus the 4×4 identity matrix. The MATLAB command is A4 = ones(4) - eye(4). For the inverse, use inv(A4).
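The repeated-squaring trick of Exercise 19 in code (a plain-Python sketch): three squarings give A^8, whose first column already agrees with the steady-state values 2/7 and 3/7 to four decimal places.

```python
# The squaring shortcut of Exercise 19: A^2, then A^4, then A^8.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[.4, .2, .3], [.3, .6, .3], [.3, .2, .4]]
P = A
for _ in range(3):        # P becomes A^2, then A^4, then A^8
    P = matmul(P, P)

col1 = [round(P[i][0], 4) for i in range(3)]
print(col1)                              # [0.2857, 0.4285, 0.2857]
print(round(2 / 7, 4), round(3 / 7, 4))  # 0.2857 0.4286
```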
    A4 = [0 1 1 1; 1 0 1 1; 1 1 0 1; 1 1 1 0], A4^{-1} = [-2/3 1/3 1/3 1/3; 1/3 -2/3 1/3 1/3; 1/3 1/3 -2/3 1/3; 1/3 1/3 1/3 -2/3]

    A5 = [0 1 1 1 1; 1 0 1 1 1; 1 1 0 1 1; 1 1 1 0 1; 1 1 1 1 0], A5^{-1} = [-3/4 1/4 1/4 1/4 1/4; 1/4 -3/4 1/4 1/4 1/4; 1/4 1/4 -3/4 1/4 1/4; 1/4 1/4 1/4 -3/4 1/4; 1/4 1/4 1/4 1/4 -3/4]

    A6 = [0 1 1 1 1 1; 1 0 1 1 1 1; 1 1 0 1 1 1; 1 1 1 0 1 1; 1 1 1 1 0 1; 1 1 1 1 1 0], and A6^{-1} has -4/5 in every diagonal entry and 1/5 in every off-diagonal entry.

    The construction of A6 and the appearance of its inverse suggest that the inverse is related to I6. In fact, A6^{-1} + I6 is 1/5 times the 6×6 matrix of ones. Let J denote the n×n matrix of ones. The conjecture is:

    An = J - In and An^{-1} = (1/(n-1))·J - In

    Proof: (Not required) Observe that J^2 = nJ and An J = (J - I)J = J^2 - J = (n - 1)J. Now compute An((n - 1)^{-1}J - I) = (n - 1)^{-1}An J - An = J - (J - I) = I. Since An is square, An is invertible and its inverse is (n - 1)^{-1}J - I.
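The conjectured formula An^{-1} = (n - 1)^{-1}J - In can be verified over the rationals for the sizes computed above (a sketch, using exact fractions to avoid roundoff):

```python
# Checking the conjecture: for A_n = J - I_n (J the all-ones matrix),
# A_n^{-1} = (1/(n-1)) J - I_n.

from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def check(n):
    A = [[1 - int(i == j) for j in range(n)] for i in range(n)]             # J - I
    Ainv = [[F(1, n - 1) - int(i == j) for j in range(n)] for i in range(n)]
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    return matmul(A, Ainv) == I

print(all(check(n) for n in (4, 5, 6)))   # True
```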
