Solution Manual for Linear Algebra and Its Applications, 5th Edition
Preview Extract
2.1 SOLUTIONS
Notes: The definition here of a matrix product AB gives the proper view of AB for nearly all matrix calculations. (The dual fact about the rows of A and the rows of AB is seldom needed, mainly because vectors here are usually written as columns.) I assign Exercise 13 and most of Exercises 17–22 to reinforce the definition of AB.
Exercises 23 and 24 are used in the proof of the Invertible Matrix Theorem, in Section 2.3. Exercises 23–25 are mentioned in a footnote in Section 2.2. A class discussion of the solutions of Exercises 23–25 can provide a transition to Section 2.2. Or, these exercises could be assigned after starting Section 2.2.
Exercises 27 and 28 are optional, but they are mentioned in Example 4 of Section 2.4. Outer products also appear in Exercises 31–34 of Section 4.6 and in the spectral decomposition of a symmetric matrix, in Section 7.1. Exercises 29–33 provide good training for mathematics majors.
1. $-2A = (-2)\begin{bmatrix} 2 & 0 & -1 \\ 4 & -5 & 2 \end{bmatrix} = \begin{bmatrix} -4 & 0 & 2 \\ -8 & 10 & -4 \end{bmatrix}$. Next, use B – 2A = B + (–2A):
$B - 2A = \begin{bmatrix} 7 & -5 & 1 \\ 1 & -4 & -3 \end{bmatrix} + \begin{bmatrix} -4 & 0 & 2 \\ -8 & 10 & -4 \end{bmatrix} = \begin{bmatrix} 3 & -5 & 3 \\ -7 & 6 & -7 \end{bmatrix}$.
The product AC is not defined because the number of columns of A does not match the number of rows of C.
$CD = \begin{bmatrix} 1 & 2 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} 3 & 5 \\ -1 & 4 \end{bmatrix} = \begin{bmatrix} 1\cdot 3 + 2(-1) & 1\cdot 5 + 2\cdot 4 \\ -2\cdot 3 + 1(-1) & -2\cdot 5 + 1\cdot 4 \end{bmatrix} = \begin{bmatrix} 1 & 13 \\ -7 & -6 \end{bmatrix}$. For mental computation, the row-column rule is probably easier to use than the definition.
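A quick numerical check of these computations is easy in MATLAB (a sketch only; the [M] exercises later in the chapter use the same software, and the matrices below are the ones given in the exercise):
  A = [2 0 -1; 4 -5 2];  B = [7 -5 1; 1 -4 -3];
  C = [1 2; -2 1];       D = [3 5; -1 4];
  disp(-2*A)        % [-4 0 2; -8 10 -4]
  disp(B - 2*A)     % [ 3 -5 3; -7 6 -7]
  disp(C*D)         % [ 1 13; -7 -6]; A*C would stop with a dimension error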
2. $A + 2B = \begin{bmatrix} 2 & 0 & -1 \\ 4 & -5 & 2 \end{bmatrix} + 2\begin{bmatrix} 7 & -5 & 1 \\ 1 & -4 & -3 \end{bmatrix} = \begin{bmatrix} 2+14 & 0-10 & -1+2 \\ 4+2 & -5-8 & 2-6 \end{bmatrix} = \begin{bmatrix} 16 & -10 & 1 \\ 6 & -13 & -4 \end{bmatrix}$
The expression 3C – E is not defined because 3C has 2 columns and –E has only 1 column.
$CB = \begin{bmatrix} 1 & 2 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} 7 & -5 & 1 \\ 1 & -4 & -3 \end{bmatrix} = \begin{bmatrix} 1\cdot 7 + 2\cdot 1 & 1(-5) + 2(-4) & 1\cdot 1 + 2(-3) \\ -2\cdot 7 + 1\cdot 1 & -2(-5) + 1(-4) & -2\cdot 1 + 1(-3) \end{bmatrix} = \begin{bmatrix} 9 & -13 & -5 \\ -13 & 6 & -5 \end{bmatrix}$
The product EB is not defined because the number of columns of E does not match the number of rows of B.
3. $3I_2 - A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix} - \begin{bmatrix} 4 & -1 \\ 5 & -2 \end{bmatrix} = \begin{bmatrix} 3-4 & 0-(-1) \\ 0-5 & 3-(-2) \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ -5 & 5 \end{bmatrix}$
$(3I_2)A = 3(I_2A) = 3\begin{bmatrix} 4 & -1 \\ 5 & -2 \end{bmatrix} = \begin{bmatrix} 12 & -3 \\ 15 & -6 \end{bmatrix}$, or
$(3I_2)A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}\begin{bmatrix} 4 & -1 \\ 5 & -2 \end{bmatrix} = \begin{bmatrix} 3\cdot 4 + 0 & 3(-1) + 0 \\ 0 + 3\cdot 5 & 0 + 3(-2) \end{bmatrix} = \begin{bmatrix} 12 & -3 \\ 15 & -6 \end{bmatrix}$

4. $A - 5I_3 = \begin{bmatrix} 9 & -1 & 3 \\ -8 & 7 & -6 \\ -4 & 1 & 8 \end{bmatrix} - \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix} = \begin{bmatrix} 4 & -1 & 3 \\ -8 & 2 & -6 \\ -4 & 1 & 3 \end{bmatrix}$
$(5I_3)A = 5(I_3A) = 5A = 5\begin{bmatrix} 9 & -1 & 3 \\ -8 & 7 & -6 \\ -4 & 1 & 8 \end{bmatrix} = \begin{bmatrix} 45 & -5 & 15 \\ -40 & 35 & -30 \\ -20 & 5 & 40 \end{bmatrix}$, or
$(5I_3)A = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}\begin{bmatrix} 9 & -1 & 3 \\ -8 & 7 & -6 \\ -4 & 1 & 8 \end{bmatrix} = \begin{bmatrix} 5\cdot 9+0+0 & 5(-1)+0+0 & 5\cdot 3+0+0 \\ 0+5(-8)+0 & 0+5\cdot 7+0 & 0+5(-6)+0 \\ 0+0+5(-4) & 0+0+5\cdot 1 & 0+0+5\cdot 8 \end{bmatrix} = \begin{bmatrix} 45 & -5 & 15 \\ -40 & 35 & -30 \\ -20 & 5 & 40 \end{bmatrix}$

5. a. $A\mathbf{b}_1 = \begin{bmatrix} -1 & 2 \\ 5 & 4 \\ 2 & -3 \end{bmatrix}\begin{bmatrix} 3 \\ -2 \end{bmatrix} = \begin{bmatrix} -7 \\ 7 \\ 12 \end{bmatrix}$, $A\mathbf{b}_2 = \begin{bmatrix} -1 & 2 \\ 5 & 4 \\ 2 & -3 \end{bmatrix}\begin{bmatrix} -2 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ -6 \\ -7 \end{bmatrix}$,
$AB = [A\mathbf{b}_1 \;\; A\mathbf{b}_2] = \begin{bmatrix} -7 & 4 \\ 7 & -6 \\ 12 & -7 \end{bmatrix}$
b. $\begin{bmatrix} -1 & 2 \\ 5 & 4 \\ 2 & -3 \end{bmatrix}\begin{bmatrix} 3 & -2 \\ -2 & 1 \end{bmatrix} = \begin{bmatrix} -1\cdot 3 + 2(-2) & -1(-2) + 2\cdot 1 \\ 5\cdot 3 + 4(-2) & 5(-2) + 4\cdot 1 \\ 2\cdot 3 - 3(-2) & 2(-2) - 3\cdot 1 \end{bmatrix} = \begin{bmatrix} -7 & 4 \\ 7 & -6 \\ 12 & -7 \end{bmatrix}$

6. a. $A\mathbf{b}_1 = \begin{bmatrix} 4 & -2 \\ -3 & 0 \\ 3 & 5 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ -3 \\ 13 \end{bmatrix}$, $A\mathbf{b}_2 = \begin{bmatrix} 4 & -2 \\ -3 & 0 \\ 3 & 5 \end{bmatrix}\begin{bmatrix} 3 \\ -1 \end{bmatrix} = \begin{bmatrix} 14 \\ -9 \\ 4 \end{bmatrix}$,
$AB = [A\mathbf{b}_1 \;\; A\mathbf{b}_2] = \begin{bmatrix} 0 & 14 \\ -3 & -9 \\ 13 & 4 \end{bmatrix}$
b. $\begin{bmatrix} 4 & -2 \\ -3 & 0 \\ 3 & 5 \end{bmatrix}\begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} 4\cdot 1 - 2\cdot 2 & 4\cdot 3 - 2(-1) \\ -3\cdot 1 + 0\cdot 2 & -3\cdot 3 + 0(-1) \\ 3\cdot 1 + 5\cdot 2 & 3\cdot 3 + 5(-1) \end{bmatrix} = \begin{bmatrix} 0 & 14 \\ -3 & -9 \\ 13 & 4 \end{bmatrix}$
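The column definition of AB and the row-column rule can be compared numerically (a MATLAB sketch using the matrices of Exercise 5):
  A = [-1 2; 5 4; 2 -3];
  B = [3 -2; -2 1];
  Ab1 = A*B(:,1);          % [-7; 7; 12]
  Ab2 = A*B(:,2);          % [ 4; -6; -7]
  disp([Ab1 Ab2])          % AB built column by column, as in the definition
  disp(A*B)                % the same matrix from the row-column rule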
7. Since A has 3 columns, B must match with 3 rows. Otherwise, AB is undefined. Since AB has 7 columns,
so does B. Thus, B is 3×7.
8. The number of rows of B matches the number of rows of BC, so B has 3 rows.
9. $AB = \begin{bmatrix} 2 & 5 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} 4 & -5 \\ 3 & k \end{bmatrix} = \begin{bmatrix} 23 & -10+5k \\ -9 & 15+k \end{bmatrix}$, while $BA = \begin{bmatrix} 4 & -5 \\ 3 & k \end{bmatrix}\begin{bmatrix} 2 & 5 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} 23 & 15 \\ 6-3k & 15+k \end{bmatrix}$.
Then AB = BA if and only if –10 + 5k = 15 and –9 = 6 – 3k, which happens if and only if k = 5.
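A quick check of the value k = 5 (a MATLAB sketch with the matrices of Exercise 9):
  A = [2 5; -3 1];
  B = [4 -5; 3 5];         % k = 5
  disp(A*B)                % [23 15; -9 20]
  disp(B*A)                % [23 15; -9 20], so AB = BA when k = 5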
10. $AB = \begin{bmatrix} 2 & -3 \\ -4 & 6 \end{bmatrix}\begin{bmatrix} 8 & 4 \\ 5 & 5 \end{bmatrix} = \begin{bmatrix} 1 & -7 \\ -2 & 14 \end{bmatrix}$, $AC = \begin{bmatrix} 2 & -3 \\ -4 & 6 \end{bmatrix}\begin{bmatrix} 5 & -2 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -7 \\ -2 & 14 \end{bmatrix}$

11. $AD = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 5 \end{bmatrix}\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix} = \begin{bmatrix} 2 & 3 & 5 \\ 2 & 6 & 15 \\ 2 & 12 & 25 \end{bmatrix}$
$DA = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 5 \end{bmatrix} = \begin{bmatrix} 2 & 2 & 2 \\ 3 & 6 & 9 \\ 5 & 20 & 25 \end{bmatrix}$
Right-multiplication (that is, multiplication on the right) by the diagonal matrix D multiplies each column
of A by the corresponding diagonal entry of D. Left-multiplication by D multiplies each row of A by the
corresponding diagonal entry of D. To make AB = BA, one can take B to be a multiple of I3. For instance,
if B = 4I3, then AB and BA are both the same as 4A.
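The row and column scaling described here is easy to see numerically (a MATLAB sketch with the A and D of Exercise 11):
  A = [1 1 1; 1 2 3; 1 4 5];
  D = diag([2 3 5]);
  disp(A*D)                % each column of A scaled by 2, 3, 5
  disp(D*A)                % each row of A scaled by 2, 3, 5
  disp(A*(4*eye(3)) - (4*eye(3))*A)   % zero matrix: B = 4*I3 commutes with A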
12. Consider B = [b1 b2]. To make AB = 0, one needs Ab1 = 0 and Ab2 = 0. By inspection of A, a suitable b1 is $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$, or any multiple of $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$. Example: $B = \begin{bmatrix} 2 & 6 \\ 1 & 3 \end{bmatrix}$.
13. Use the definition of AB written in reverse order: [Ab1 ⋯ Abp] = A[b1 ⋯ bp]. Thus
[Qr1 ⋯ Qrp] = QR, when R = [r1 ⋯ rp].
14. By definition, UQ = U[q1 ⋯ q4] = [Uq1 ⋯ Uq4]. From Example 6 of Section 1.8, the vector
Uq1 lists the total costs (material, labor, and overhead) corresponding to the amounts of products B and
C specified in the vector q1. That is, the first column of UQ lists the total costs for materials, labor, and
overhead used to manufacture products B and C during the first quarter of the year. Columns 2, 3,
and 4 of UQ list the total amounts spent to manufacture B and C during the 2nd, 3rd, and 4th quarters,
respectively.
15. a. False. See the definition of AB.
b. False. The roles of A and B should be reversed in the second half of the statement. See the box after
Example 3.
c. True. See Theorem 2(b), read right to left.
d. True. See Theorem 3(b), read right to left.
e. False. The phrase "in the same order" should be "in the reverse order." See the box after Theorem 3.
16. a. False. AB must be a 3×3 matrix, but the formula for AB implies that it is 3×1. The plus signs should be just spaces (between columns). This is a common mistake.
b. True. See the box after Example 6.
c. False. The left-to-right order of B and C cannot be changed, in general.
d. False. See Theorem 3(d).
e. True. This general statement follows from Theorem 3(b).
17. Since $\begin{bmatrix} -1 & 2 & -1 \\ 6 & -9 & 3 \end{bmatrix} = AB = [A\mathbf{b}_1 \; A\mathbf{b}_2 \; A\mathbf{b}_3]$, the first column of B satisfies the equation $A\mathbf{x} = \begin{bmatrix} -1 \\ 6 \end{bmatrix}$.
Row reduction: $[A \;\; A\mathbf{b}_1] \sim \begin{bmatrix} 1 & -2 & -1 \\ -2 & 5 & 6 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 7 \\ 0 & 1 & 4 \end{bmatrix}$. So $\mathbf{b}_1 = \begin{bmatrix} 7 \\ 4 \end{bmatrix}$. Similarly,
$[A \;\; A\mathbf{b}_2] \sim \begin{bmatrix} 1 & -2 & 2 \\ -2 & 5 & -9 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -8 \\ 0 & 1 & -5 \end{bmatrix}$ and $\mathbf{b}_2 = \begin{bmatrix} -8 \\ -5 \end{bmatrix}$.
Note: An alternative solution of Exercise 17 is to row reduce [A Ab1 Ab2] with one sequence of row
operations. This observation can prepare the way for the inversion algorithm in Section 2.2.
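The single row reduction suggested in the note can be carried out numerically (a MATLAB sketch with the matrices of Exercise 17):
  A  = [1 -2; -2 5];
  AB = [-1 2 -1; 6 -9 3];
  rref([A AB(:,1) AB(:,2)])   % [1 0 7 -8; 0 1 4 -5], so b1 = [7; 4] and b2 = [-8; -5]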
18. The first two columns of AB are Ab1 and Ab2. They are equal since b1 and b2 are equal.
19. (A solution is in the text). Write B = [b1 b2 b3]. By definition, the third column of AB is Ab3. By
hypothesis, b3 = b1 + b2. So Ab3 = A(b1 + b2) = Ab1 + Ab2, by a property of matrix-vector multiplication.
Thus, the third column of AB is the sum of the first two columns of AB.
20. The second column of AB is also all zeros because Ab2 = A0 = 0.
21. Let bp be the last column of B. By hypothesis, the last column of AB is zero. Thus, Abp = 0. However,
bp is not the zero vector, because B has no column of zeros. Thus, the equation Abp = 0 is a linear
dependence relation among the columns of A, and so the columns of A are linearly dependent.
Note: The text answer for Exercise 21 is, "The columns of A are linearly dependent. Why?" The Study Guide
supplies the argument above in case a student needs help.
22. If the columns of B are linearly dependent, then there exists a nonzero vector x such that Bx = 0. From
this, A(Bx) = A0 and (AB)x = 0 (by associativity). Since x is nonzero, the columns of AB must be linearly
dependent.
23. If x satisfies Ax = 0, then CAx = C0 = 0 and so Inx = 0 and x = 0. This shows that the equation Ax = 0
has no free variables. So every variable is a basic variable and every column of A is a pivot column.
(A variation of this argument could be made using linear independence and Exercise 30 in Section 1.7.)
Since each pivot is in a different row, A must have at least as many rows as columns.
24. Take any b in ℝᵐ. By hypothesis, ADb = Imb = b. Rewrite this equation as A(Db) = b. Thus, the vector x = Db satisfies Ax = b. This proves that the equation Ax = b has a solution for each b in ℝᵐ. By Theorem 4 in Section 1.4, A has a pivot position in each row. Since each pivot is in a different column, A must have at least as many columns as rows.
25. By Exercise 23, the equation CA = In implies that (number of rows in A) ≥ (number of columns), that is, m ≥ n. By Exercise 24, the equation AD = Im implies that (number of rows in A) ≤ (number of columns), that is, m ≤ n. Thus m = n.
Note: Exercise 23 is good for mathematics and computer science students. The solution of Exercise 23 in the
Study Guide shows students how to use the principle of induction. The Study Guide also has an appendix on
"The Principle of Induction," at the end of Section 2.4. The text presents more applications of induction in
Section 3.2 and in the Supplementary Exercises for Chapter 3.
24. Let $A_n = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 0 & & 0 \\ 1 & 1 & 1 & & 0 \\ \vdots & & & \ddots & \vdots \\ 1 & 1 & 1 & \cdots & 1 \end{bmatrix}$, $B_n = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ -1 & 1 & 0 & & 0 \\ 0 & -1 & 1 & & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -1 & 1 \end{bmatrix}$.
By direct computation A2B2 = I2. Assume that for n = k, the matrix AkBk is Ik, and write
$A_{k+1} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{v} & A_k \end{bmatrix}$ and $B_{k+1} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{w} & B_k \end{bmatrix}$
where v and w are in ℝᵏ, vᵀ = [1 1 ⋯ 1], and wᵀ = [–1 0 ⋯ 0]. Then
$A_{k+1}B_{k+1} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{v} & A_k \end{bmatrix}\begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{w} & B_k \end{bmatrix} = \begin{bmatrix} 1 + \mathbf{0}^T\mathbf{w} & \mathbf{0}^T + \mathbf{0}^T B_k \\ \mathbf{v} + A_k\mathbf{w} & \mathbf{v}\mathbf{0}^T + A_k B_k \end{bmatrix} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{0} & I_k \end{bmatrix} = I_{k+1}$
The (2,1)-entry is 0 because v equals the first column of Ak, and Akw is –1 times the first column of Ak. By the principle of induction, AnBn = In for all n ≥ 2. Since An and Bn are square, the IMT shows that these matrices are invertible, and Bn = An⁻¹.
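A numerical spot check of the induction result (a MATLAB sketch; the matrices are built to match the definitions of An and Bn above):
  n = 5;
  A = tril(ones(n));                  % ones on and below the diagonal
  B = eye(n) - diag(ones(n-1,1),-1);  % ones on the diagonal, -1 on the subdiagonal
  disp(A*B)                           % the 5x5 identity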
Note: An induction proof can also be given using partitions with the form shown below. The details are
slightly more complicated.
$A_{k+1} = \begin{bmatrix} A_k & \mathbf{0} \\ \mathbf{v}^T & 1 \end{bmatrix}$ and $B_{k+1} = \begin{bmatrix} B_k & \mathbf{0} \\ \mathbf{w}^T & 1 \end{bmatrix}$
$A_{k+1}B_{k+1} = \begin{bmatrix} A_k & \mathbf{0} \\ \mathbf{v}^T & 1 \end{bmatrix}\begin{bmatrix} B_k & \mathbf{0} \\ \mathbf{w}^T & 1 \end{bmatrix} = \begin{bmatrix} A_kB_k + \mathbf{0}\mathbf{w}^T & A_k\mathbf{0} + \mathbf{0} \\ \mathbf{v}^TB_k + \mathbf{w}^T & \mathbf{v}^T\mathbf{0} + 1 \end{bmatrix} = \begin{bmatrix} I_k & \mathbf{0} \\ \mathbf{0}^T & 1 \end{bmatrix} = I_{k+1}$
The (2,1)-entry is 0ᵀ because vᵀ times a column of Bk equals the sum of the entries in the column, and all such sums are zero except the last, which is 1. So vᵀBk is the negative of wᵀ. By the principle of induction, AnBn = In for all n ≥ 2. Since An and Bn are square, the IMT shows that these matrices are invertible, and Bn = An⁻¹.
25. First, visualize a partition of A as a 2×2 block-diagonal matrix, as below, and then visualize the (2,2)-block itself as a block-diagonal matrix. That is,
$A = \begin{bmatrix} 1 & 2 & 0 & 0 & 0 \\ 3 & 5 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 7 & 8 \\ 0 & 0 & 0 & 5 & 6 \end{bmatrix} = \begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}$, where $A_{22} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 7 & 8 \\ 0 & 5 & 6 \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 0 & B \end{bmatrix}$.
Observe that B is invertible and $B^{-1} = \begin{bmatrix} 3 & -4 \\ -2.5 & 3.5 \end{bmatrix}$. By Exercise 13, the block diagonal matrix $A_{22}$ is invertible, and
$A_{22}^{-1} = \begin{bmatrix} .5 & 0 & 0 \\ 0 & 3 & -4 \\ 0 & -2.5 & 3.5 \end{bmatrix}$
Next, observe that $A_{11}$ is also invertible, with inverse $\begin{bmatrix} -5 & 2 \\ 3 & -1 \end{bmatrix}$. By Exercise 13, A itself is invertible, and its inverse is block diagonal:
$A^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 \\ 0 & A_{22}^{-1} \end{bmatrix} = \begin{bmatrix} -5 & 2 & 0 & 0 & 0 \\ 3 & -1 & 0 & 0 & 0 \\ 0 & 0 & .5 & 0 & 0 \\ 0 & 0 & 0 & 3 & -4 \\ 0 & 0 & 0 & -2.5 & 3.5 \end{bmatrix}$
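The block-diagonal inversion can be verified numerically (a MATLAB sketch using the blocks identified above):
  A11 = [1 2; 3 5];
  A22 = [2 0 0; 0 7 8; 0 5 6];
  A    = blkdiag(A11, A22);
  Ainv = blkdiag(inv(A11), inv(A22));
  disp(A*Ainv)             % the 5x5 identity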
26. [M] This exercise and the next, which involve large matrices, are more appropriate for MATLAB, Maple, and Mathematica than for the graphic calculators.
a. Display the submatrix of A obtained from rows 15 to 20 and columns 5 to 10.
MATLAB:
A(15:20, 5:10)
Maple:
submatrix(A, 15..20, 5..10)
Mathematica:
Take[ A, {15,20}, {5,10} ]
b. Insert a 5×10 matrix B into rows 10 to 14 and columns 20 to 29 of matrix A:
MATLAB:
A(10:14, 20:29) = B ;
The semicolon suppresses output display.
Maple:
copyinto(B, A, 10, 20):
Mathematica:
For[ i=10, i<=14, i++,
  For[ j=20, j<=29, j++, A[[ i, j ]] = B[[ i-9, j-19 ]] ] ]
27. a. By hypothesis, the columns of B span W, and q > p. For j = 1, …, q, the vector aj is
in W. Since the columns of B span W, the vector aj is in the column space of B. That is, aj = Bcj for some vector cj of weights. Note that cj is in ℝᵖ because B has p columns.
b. Let C = [c1 ⋯ cq]. Then C is a p×q matrix because each of the q columns is in ℝᵖ. By hypothesis, q is larger than p, so C has more columns than rows. By a theorem, the columns of C are linearly dependent and there exists a nonzero vector u in ℝ^q such that Cu = 0.
c. From part (a) and the definition of matrix multiplication A = [a1 ⋯ aq] = [Bc1 ⋯ Bcq] = BC. From part (b), Au = (BC)u = B(Cu) = B0 = 0. Since u is nonzero, the columns of A are linearly dependent.
28. If one of the two bases contained more vectors than the other, then the larger basis would be linearly dependent, by Exercise 27, because the smaller basis spans W. Repeat the argument with the roles of the two bases interchanged to conclude that neither basis can contain more vectors than the other.
29. [M] Apply the matrix command ref or rref to the matrix [v1 v2 x]:
$\begin{bmatrix} 11 & 14 & 19 \\ -5 & -8 & -13 \\ 10 & 13 & 18 \\ 7 & 10 & 15 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1.667 \\ 0 & 1 & 2.667 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
The equation c1v1 + c2v2 = x is consistent, so x is in the subspace H. The decimal approximations suggest c1 = –5/3 and c2 = 8/3, and it can be checked that these values are precise. Thus, the ℬ-coordinate of x is (–5/3, 8/3).
30. [M] Apply the matrix command ref or rref to the matrix [v1 v2 v3 x]:
$\begin{bmatrix} -6 & 8 & -9 & 4 \\ 4 & -3 & 5 & 7 \\ -9 & 7 & -8 & -8 \\ 4 & -3 & 3 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 3 \\ 0 & 1 & 0 & 5 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
The first three columns of [v1 v2 v3 x] are pivot columns, so v1, v2 and v3 are linearly independent. Thus v1, v2 and v3 form a basis ℬ for the subspace H which they span. View [v1 v2 v3 x] as an augmented matrix for c1v1 + c2v2 + c3v3 = x. The reduced echelon form shows that x is in H and
$[\mathbf{x}]_{\mathcal{B}} = \begin{bmatrix} 3 \\ 5 \\ 2 \end{bmatrix}$.
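The [M] computation itself is a one-line command (a MATLAB sketch; the vectors are the columns displayed above):
  v1 = [-6; 4; -9; 4]; v2 = [8; -3; 7; -3]; v3 = [-9; 5; -8; 3]; x = [4; 7; -8; 3];
  rref([v1 v2 v3 x])       % [1 0 0 3; 0 1 0 5; 0 0 1 2; 0 0 0 0]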
Notes: The Study Guide for Section 2.9 contains a complete list of the statements in the Invertible Matrix Theorem that have been given so far. The format is the same as that used in Section 2.3, with three columns: statements that are logically equivalent for any m×n matrix and are related to existence concepts, those that are equivalent only for any n×n matrix, and those that are equivalent for any n×p matrix and are related to uniqueness concepts. Four statements are included that are not in the text's official list of statements, to give more symmetry to the three columns.
The Study Guide section also contains directions for making a review sheet for "dimension" and "rank."
Chapter 2
SUPPLEMENTARY EXERCISES
1. a. True. If A and B are m×n matrices, then Bᵀ has as many rows as A has columns, so ABᵀ is defined. Also, AᵀB is defined because Aᵀ has m columns and B has m rows.
b. False. B must have 2 columns. A has as many columns as B has rows.
c. True. The ith row of A has the form (0, …, di, …, 0). So the ith row of AB is (0, …, di, …, 0)B, which is di times the ith row of B.
d. False. Take the zero matrix for B. Or, construct a matrix B such that the equation Bx = 0 has nontrivial solutions, and construct C and D so that C ≠ D and the columns of C – D satisfy the equation Bx = 0. Then B(C – D) = 0 and BC = BD.
e. False. Counterexample: $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $C = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$.
f. False. (A + B)(A – B) = A² – AB + BA – B². This equals A² – B² if and only if A commutes with B (a numerical check appears after this list).
g. True. An n×n replacement matrix has n + 1 nonzero entries. The n×n scale and interchange matrices have n nonzero entries.
h. True. The transpose of an elementary matrix is an elementary matrix of the same type.
i. True. An n×n elementary matrix is obtained by a row operation on In.
j. False. Elementary matrices are invertible, so a product of such matrices is invertible. But not every square matrix is invertible.
k. True. If A is 3×3 with three pivot positions, then A is row equivalent to I3.
l. False. A must be square in order to conclude from the equation AB = I that A is invertible.
m. False. AB is invertible, but (AB)⁻¹ = B⁻¹A⁻¹, and this product is not always equal to A⁻¹B⁻¹.
n. True. Given AB = BA, left-multiply by A⁻¹ to get B = A⁻¹BA, and then right-multiply by A⁻¹ to obtain BA⁻¹ = A⁻¹B.
o. False. The correct equation is (rA)⁻¹ = r⁻¹A⁻¹, because (rA)(r⁻¹A⁻¹) = (rr⁻¹)(AA⁻¹) = 1·I = I.
p. True. If the equation $A\mathbf{x} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$ has a unique solution, then there are no free variables in this equation, which means that A must have three pivot positions (since A is 3×3). By the Invertible Matrix Theorem, A is invertible.
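A quick check of part (f) (a MATLAB sketch; the two matrices are the ones used in Exercise 6 below):
  A = [1 0; 0 -1];  B = [0 1; 1 0];
  disp((A+B)*(A-B))        % [0 -2; 2 0]
  disp(A^2 - B^2)          % the zero matrix, so the two expressions differ here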
2. $C = (C^{-1})^{-1} = \dfrac{1}{-2}\begin{bmatrix} 7 & -5 \\ -6 & 4 \end{bmatrix} = \begin{bmatrix} -7/2 & 5/2 \\ 3 & -2 \end{bmatrix}$

3. $A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$, $A^2 = A\cdot A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}$
$A^3 = A^2\cdot A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
Next, (I – A)(I + A + A²) = I + A + A² – A(I + A + A²) = I + A + A² – A – A² – A³ = I – A³.
Since A³ = 0, (I – A)(I + A + A²) = I.
4. From Exercise 3, the inverse of I – A is probably I + A + A² + ⋯ + Aⁿ⁻¹. To verify this, compute
(I – A)(I + A + ⋯ + Aⁿ⁻¹) = I + A + ⋯ + Aⁿ⁻¹ – A(I + A + ⋯ + Aⁿ⁻¹) = I – AAⁿ⁻¹ = I – Aⁿ
If Aⁿ = 0, then the matrix B = I + A + A² + ⋯ + Aⁿ⁻¹ satisfies (I – A)B = I. Since I – A and B are square, they are invertible by the Invertible Matrix Theorem, and B is the inverse of I – A.
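A numerical check of Exercises 3 and 4 together (a MATLAB sketch with the nilpotent matrix from Exercise 3):
  A = [0 0 0; 1 0 0; 0 1 0];
  disp(A^3)                               % the zero matrix
  disp((eye(3) - A)*(eye(3) + A + A^2))   % the identity, so I + A + A^2 inverts I - A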
5. A² = 2A – I. Multiply by A: A³ = 2A² – A. Substitute A² = 2A – I: A³ = 2(2A – I) – A = 3A – 2I.
Multiply by A again: A⁴ = A(3A – 2I) = 3A² – 2A. Substitute the identity A² = 2A – I again.
Finally, A⁴ = 3(2A – I) – 2A = 4A – 3I.
6. Let $A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. By direct computation, A² = I, B² = I, and $AB = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = -BA$.
7. (Partial answer in Study Guide) Since A⁻¹B is the solution of AX = B, row reduction of [A B] to [I X] will produce X = A⁻¹B. See Exercise 12 in Section 2.2.
$[A \;\; B] = \begin{bmatrix} 1 & 3 & 8 & -3 & 5 \\ 2 & 4 & 11 & 1 & 5 \\ 1 & 2 & 5 & 3 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 8 & -3 & 5 \\ 0 & -2 & -5 & 7 & -5 \\ 0 & -1 & -3 & 6 & -1 \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 8 & -3 & 5 \\ 0 & 1 & 3 & -6 & 1 \\ 0 & -2 & -5 & 7 & -5 \end{bmatrix}$
$\sim \begin{bmatrix} 1 & 3 & 8 & -3 & 5 \\ 0 & 1 & 3 & -6 & 1 \\ 0 & 0 & 1 & -5 & -3 \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 0 & 37 & 29 \\ 0 & 1 & 0 & 9 & 10 \\ 0 & 0 & 1 & -5 & -3 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 10 & -1 \\ 0 & 1 & 0 & 9 & 10 \\ 0 & 0 & 1 & -5 & -3 \end{bmatrix}$. Thus, $A^{-1}B = \begin{bmatrix} 10 & -1 \\ 9 & 10 \\ -5 & -3 \end{bmatrix}$.

8. By definition of matrix multiplication, the matrix A satisfies $A\begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix} = \begin{bmatrix} 1 & 3 \\ 1 & 1 \end{bmatrix}$.
Right-multiply both sides by the inverse of $\begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix}$. The left side becomes A. Thus,
$A = \begin{bmatrix} 1 & 3 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 7 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 4 & -1 \end{bmatrix}$.

9. Given $AB = \begin{bmatrix} 5 & 4 \\ -2 & 3 \end{bmatrix}$ and $B = \begin{bmatrix} 7 & 3 \\ 2 & 1 \end{bmatrix}$, notice that ABB⁻¹ = A. Since det B = 7 – 6 = 1,
$B^{-1} = \begin{bmatrix} 1 & -3 \\ -2 & 7 \end{bmatrix}$ and $A = (AB)B^{-1} = \begin{bmatrix} 5 & 4 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} 1 & -3 \\ -2 & 7 \end{bmatrix} = \begin{bmatrix} -3 & 13 \\ -8 & 27 \end{bmatrix}$
Note: Variants of this question make simple exam questions.
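Both computations are easy to confirm numerically (a MATLAB sketch; the matrices for Exercise 7 are the ones reconstructed above):
  A = [1 3 8; 2 4 11; 1 2 5];  B = [-3 5; 1 5; 3 4];     % Exercise 7
  rref([A B])              % last two columns give A^(-1)*B = [10 -1; 9 10; -5 -3]
  AB = [5 4; -2 3];  B2 = [7 3; 2 1];                    % Exercise 9
  disp(AB*inv(B2))         % [-3 13; -8 27]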
10. Since A is invertible, so is Aᵀ, by the Invertible Matrix Theorem. Then AᵀA is the product of invertible matrices and so is invertible. Thus, the formula (AᵀA)⁻¹Aᵀ makes sense. By Theorem 6 in Section 2.2,
(AᵀA)⁻¹·Aᵀ = A⁻¹(Aᵀ)⁻¹Aᵀ = A⁻¹I = A⁻¹
An alternative calculation: (AᵀA)⁻¹Aᵀ·A = (AᵀA)⁻¹(AᵀA) = I. Since A is invertible, this equation shows that its inverse is (AᵀA)⁻¹Aᵀ.
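A small numerical illustration (a MATLAB sketch; the matrix A here is made up for the example, and any invertible A would do):
  A = [2 1; 5 3];
  disp(inv(A'*A)*A')       % [3 -1; -5 2]
  disp(inv(A))             % the same matrix, as the identity above predicts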
11. a. For i = 1, …, n, $p(x_i) = c_0 + c_1x_i + \cdots + c_{n-1}x_i^{\,n-1} = \mathrm{row}_i(V)\begin{bmatrix} c_0 \\ \vdots \\ c_{n-1} \end{bmatrix} = \mathrm{row}_i(V)\mathbf{c}$.
By a property of matrix multiplication, shown after Example 6 in Section 2.1, and the fact that c was chosen to satisfy Vc = y, row_i(V)c = row_i(Vc) = row_i(y) = y_i. Thus, p(x_i) = y_i. To summarize, the entries in Vc are the values of the polynomial p(x) at x1, …, xn.
b. Suppose x1, …, xn are distinct, and suppose Vc = 0 for some vector c. Then the entries in c are the coefficients of a polynomial whose value is zero at the distinct points x1, ..., xn. However, a nonzero polynomial of degree n – 1 cannot have n zeros, so the polynomial must be identically zero. That is, the entries in c must all be zero. This shows that the columns of V are linearly independent.
c. (Solution in Study Guide) When x1, …, xn are distinct, the columns of V are linearly independent, by (b). By the Invertible Matrix Theorem, V is invertible and its columns span ℝⁿ. So, for every y = (y1, …, yn) in ℝⁿ, there is a vector c such that Vc = y. Let p be the polynomial whose coefficients are listed in c. Then, by (a), p is an interpolating polynomial for (x1, y1), …, (xn, yn).
12. If A = LU, then col1(A) = L·col1(U). Since col1(U) has a zero in every entry except possibly the first, L·col1(U) is a linear combination of the columns of L in which all weights except possibly the first are zero. So col1(A) is a multiple of col1(L).
Similarly, col2(A) = L·col2(U), which is a linear combination of the columns of L using the first two entries in col2(U) as weights, because the other entries in col2(U) are zero. Thus col2(A) is a linear combination of the first two columns of L.
13. a. P² = (uuᵀ)(uuᵀ) = u(uᵀu)uᵀ = u(1)uᵀ = P, because u satisfies uᵀu = 1.
b. Pᵀ = (uuᵀ)ᵀ = uᵀᵀuᵀ = uuᵀ = P
c. Q² = (I – 2P)(I – 2P) = I – I(2P) – 2PI + 2P(2P) = I – 4P + 4P² = I, because of part (a).
14. Given $\mathbf{u} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$, define P and Q as in Exercise 13 by
$P = \mathbf{u}\mathbf{u}^T = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\begin{bmatrix} 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, $Q = I - 2P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} - 2\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$
If $\mathbf{x} = \begin{bmatrix} 1 \\ 5 \\ 3 \end{bmatrix}$, then $P\mathbf{x} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 5 \\ 3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 3 \end{bmatrix}$ and $Q\mathbf{x} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}\begin{bmatrix} 1 \\ 5 \\ 3 \end{bmatrix} = \begin{bmatrix} 1 \\ 5 \\ -3 \end{bmatrix}$.
15. Left-multiplication by an elementary matrix produces an elementary row operation:
B ~ E1B ~ E2E1B ~ E3E2E1B = C, so B is row equivalent to C. Since row operations are reversible, C is
row equivalent to B. (Alternatively, show C being changed into B by row operations using the inverse of
the Ei .)
16. Since A is not invertible, there is a nonzero vector v in ℝⁿ such that Av = 0. Place n copies of v into an n×n matrix B. Then AB = A[v ⋯ v] = [Av ⋯ Av] = 0.
17. Let A be a 6×4 matrix and B a 4×6 matrix. Since B has more columns than rows, its six columns are
linearly dependent and there is a nonzero x such that Bx = 0. Thus ABx = A0 = 0. This shows that the
matrix AB is not invertible, by the IMT. (Basically the same argument was used to solve Exercise 22 in
Section 2.1.)
Note: (In the Study Guide) It is possible that BA is invertible. For example, let C be an invertible 4×4 matrix and construct $A = \begin{bmatrix} C \\ 0 \end{bmatrix}$ and $B = [C^{-1} \;\; 0]$. Then BA = I4, which is invertible.
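The construction in this note can be tried numerically (a MATLAB sketch; C = I4 is used just for concreteness):
  C = eye(4);
  A = [C; zeros(2,4)];      % 6x4
  B = [inv(C), zeros(4,2)]; % 4x6
  disp(B*A)                 % I4, so BA is invertible
  disp(rank(A*B))           % 4, so the 6x6 matrix AB is not invertible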
18. By hypothesis, A is 5×3, C is 3×5, and AC = I3. Suppose x satisfies Ax = b. Then CAx = Cb. Since
CA = I, x must be Cb. This shows that Cb is the only solution of Ax = b.
19. [M] Let $A = \begin{bmatrix} .4 & .2 & .3 \\ .3 & .6 & .3 \\ .3 & .2 & .4 \end{bmatrix}$. Then $A^2 = \begin{bmatrix} .31 & .26 & .30 \\ .39 & .48 & .39 \\ .30 & .26 & .31 \end{bmatrix}$. Instead of computing A³ next, speed up the calculations by computing
$A^4 = A^2A^2 = \begin{bmatrix} .2875 & .2834 & .2874 \\ .4251 & .4332 & .4251 \\ .2874 & .2834 & .2875 \end{bmatrix}$, $A^8 = A^4A^4 = \begin{bmatrix} .2857 & .2857 & .2857 \\ .4285 & .4286 & .4285 \\ .2857 & .2857 & .2857 \end{bmatrix}$
To four decimal places, as k increases, $A^k \to \begin{bmatrix} .2857 & .2857 & .2857 \\ .4286 & .4286 & .4286 \\ .2857 & .2857 & .2857 \end{bmatrix}$, or, in rational format, $A^k \to \begin{bmatrix} 2/7 & 2/7 & 2/7 \\ 3/7 & 3/7 & 3/7 \\ 2/7 & 2/7 & 2/7 \end{bmatrix}$.
If $B = \begin{bmatrix} 0 & .2 & .3 \\ .1 & .6 & .3 \\ .9 & .2 & .4 \end{bmatrix}$, then $B^2 = \begin{bmatrix} .29 & .18 & .18 \\ .33 & .44 & .33 \\ .38 & .38 & .49 \end{bmatrix}$, $B^4 = \begin{bmatrix} .2119 & .1998 & .1998 \\ .3663 & .3764 & .3663 \\ .4218 & .4218 & .4339 \end{bmatrix}$,
$B^8 = \begin{bmatrix} .2024 & .2022 & .2022 \\ .3707 & .3709 & .3707 \\ .4269 & .4269 & .4271 \end{bmatrix}$. To four decimal places, as k increases, $B^k \to \begin{bmatrix} .2022 & .2022 & .2022 \\ .3708 & .3708 & .3708 \\ .4270 & .4270 & .4270 \end{bmatrix}$,
or, in rational format, $B^k \to \begin{bmatrix} 18/89 & 18/89 & 18/89 \\ 33/89 & 33/89 & 33/89 \\ 38/89 & 38/89 & 38/89 \end{bmatrix}$.
20. [M] The 4×4 matrix A4 is the 4×4 matrix of ones, minus the 4×4 identity matrix. The MATLAB command is A4 = ones(4) – eye(4). For the inverse, use inv(A4).
$A_4 = \begin{bmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{bmatrix}$, $A_4^{-1} = \begin{bmatrix} -2/3 & 1/3 & 1/3 & 1/3 \\ 1/3 & -2/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & -2/3 & 1/3 \\ 1/3 & 1/3 & 1/3 & -2/3 \end{bmatrix}$
$A_5 = \begin{bmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 \end{bmatrix}$, $A_5^{-1} = \begin{bmatrix} -3/4 & 1/4 & 1/4 & 1/4 & 1/4 \\ 1/4 & -3/4 & 1/4 & 1/4 & 1/4 \\ 1/4 & 1/4 & -3/4 & 1/4 & 1/4 \\ 1/4 & 1/4 & 1/4 & -3/4 & 1/4 \\ 1/4 & 1/4 & 1/4 & 1/4 & -3/4 \end{bmatrix}$
$A_6 = \begin{bmatrix} 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 \end{bmatrix}$, $A_6^{-1} = \begin{bmatrix} -4/5 & 1/5 & 1/5 & 1/5 & 1/5 & 1/5 \\ 1/5 & -4/5 & 1/5 & 1/5 & 1/5 & 1/5 \\ 1/5 & 1/5 & -4/5 & 1/5 & 1/5 & 1/5 \\ 1/5 & 1/5 & 1/5 & -4/5 & 1/5 & 1/5 \\ 1/5 & 1/5 & 1/5 & 1/5 & -4/5 & 1/5 \\ 1/5 & 1/5 & 1/5 & 1/5 & 1/5 & -4/5 \end{bmatrix}$
The construction of A6 and the appearance of its inverse suggest that the inverse is related to I6. In fact, A6⁻¹ + I6 is 1/5 times the 6×6 matrix of ones. Let J denote the n×n matrix of ones. The conjecture is:
$A_n = J - I_n$ and $A_n^{-1} = \dfrac{1}{n-1}J - I_n$
Proof: (Not required) Observe that J² = nJ and AnJ = (J – I)J = J² – J = (n – 1)J. Now compute An((n – 1)⁻¹J – I) = (n – 1)⁻¹AnJ – An = J – (J – I) = I. Since An is square, An is invertible and its inverse is (n – 1)⁻¹J – I.
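The conjecture is easy to test numerically (a MATLAB sketch for n = 6):
  n = 6;  J = ones(n);
  An = J - eye(n);
  disp(An*(J/(n-1) - eye(n)))   % the 6x6 identity, up to roundoff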