2.1 – Matrix Operations
Notes: The definition here of a matrix product AB gives the proper view of AB for nearly all matrix calculations. (The dual fact about the rows of A and the rows of AB is seldom needed, mainly because vectors here are usually written as columns.) I assign Exercise 13 and most of Exercises 25–30 to reinforce the definition of AB.
Exercises 31 and 32 are used in the proof of the Invertible Matrix Theorem, in Section 2.3. Exercises 31–33 are mentioned in a footnote in Section 2.2. A class discussion of the solutions of Exercises 31–33 can provide a transition to Section 2.2. Or, these exercises could be assigned after starting Section 2.2.
Exercises 35 and 36 are optional, but they are mentioned in Example 4 of Section 2.4. Outer products also appear in the spectral decomposition of a symmetric matrix, in Section 7.1. Exercises 37–41 provide good training for mathematics majors.
When I talk with my colleagues in Engineering, the first thing they tell me is that they wish students in their classes could multiply matrices. Exercises 49–52 provide simple examples of where multiplication is used in high-tech applications.
1. $-2A = (-2)\begin{bmatrix} 2 & 0 & -1 \\ 4 & -3 & 2 \end{bmatrix} = \begin{bmatrix} -4 & 0 & 2 \\ -8 & 6 & -4 \end{bmatrix}$. Next, use B − 2A = B + (−2A):
$B - 2A = \begin{bmatrix} 7 & -5 & 1 \\ 1 & -4 & -3 \end{bmatrix} + \begin{bmatrix} -4 & 0 & 2 \\ -8 & 6 & -4 \end{bmatrix} = \begin{bmatrix} 3 & -5 & 3 \\ -7 & 2 & -7 \end{bmatrix}$
The product AC is not defined because the number of columns of A does not match the number of rows of C.
$CD = \begin{bmatrix} 1 & 2 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} 3 & 5 \\ -1 & 4 \end{bmatrix} = \begin{bmatrix} 1\cdot 3 + 2(-1) & 1\cdot 5 + 2\cdot 4 \\ -2\cdot 3 + 1(-1) & -2\cdot 5 + 1\cdot 4 \end{bmatrix} = \begin{bmatrix} 1 & 13 \\ -7 & -6 \end{bmatrix}$. For mental computation, the row-column rule is probably easier to use than the definition.
2. $A + 2B = \begin{bmatrix} 2 & 0 & -1 \\ 4 & -3 & 2 \end{bmatrix} + 2\begin{bmatrix} 7 & -5 & 1 \\ 1 & -4 & -3 \end{bmatrix} = \begin{bmatrix} 2+14 & 0-10 & -1+2 \\ 4+2 & -3-8 & 2-6 \end{bmatrix} = \begin{bmatrix} 16 & -10 & 1 \\ 6 & -11 & -4 \end{bmatrix}$
The expression 3C − E is not defined because 3C has 2 columns and E has only 1 column.
$CB = \begin{bmatrix} 1 & 2 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} 7 & -5 & 1 \\ 1 & -4 & -3 \end{bmatrix} = \begin{bmatrix} 1\cdot 7 + 2\cdot 1 & 1(-5) + 2(-4) & 1\cdot 1 + 2(-3) \\ -2\cdot 7 + 1\cdot 1 & -2(-5) + 1(-4) & -2\cdot 1 + 1(-3) \end{bmatrix} = \begin{bmatrix} 9 & -13 & -5 \\ -13 & 6 & -5 \end{bmatrix}$
The product EB is not defined because the number of columns of E does not match the number of rows of B.
3. $3I_2 - A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix} - \begin{bmatrix} 4 & -1 \\ 5 & -2 \end{bmatrix} = \begin{bmatrix} 3-4 & 0-(-1) \\ 0-5 & 3-(-2) \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ -5 & 5 \end{bmatrix}$
$(3I_2)A = 3(I_2A) = 3\begin{bmatrix} 4 & -1 \\ 5 & -2 \end{bmatrix} = \begin{bmatrix} 12 & -3 \\ 15 & -6 \end{bmatrix}$, or
$(3I_2)A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}\begin{bmatrix} 4 & -1 \\ 5 & -2 \end{bmatrix} = \begin{bmatrix} 3\cdot 4 + 0 & 3(-1) + 0 \\ 0 + 3\cdot 5 & 0 + 3(-2) \end{bmatrix} = \begin{bmatrix} 12 & -3 \\ 15 & -6 \end{bmatrix}$
4. $A - 5I_3 = \begin{bmatrix} 9 & -1 & 3 \\ -8 & 7 & -3 \\ -4 & 1 & 8 \end{bmatrix} - \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix} = \begin{bmatrix} 4 & -1 & 3 \\ -8 & 2 & -3 \\ -4 & 1 & 3 \end{bmatrix}$
$(5I_3)A = 5(I_3A) = 5A = 5\begin{bmatrix} 9 & -1 & 3 \\ -8 & 7 & -3 \\ -4 & 1 & 8 \end{bmatrix} = \begin{bmatrix} 45 & -5 & 15 \\ -40 & 35 & -15 \\ -20 & 5 & 40 \end{bmatrix}$, or
$(5I_3)A = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}\begin{bmatrix} 9 & -1 & 3 \\ -8 & 7 & -3 \\ -4 & 1 & 8 \end{bmatrix} = \begin{bmatrix} 5\cdot 9+0+0 & 5(-1)+0+0 & 5\cdot 3+0+0 \\ 0+5(-8)+0 & 0+5\cdot 7+0 & 0+5(-3)+0 \\ 0+0+5(-4) & 0+0+5\cdot 1 & 0+0+5\cdot 8 \end{bmatrix} = \begin{bmatrix} 45 & -5 & 15 \\ -40 & 35 & -15 \\ -20 & 5 & 40 \end{bmatrix}$
5. a. $A\mathbf{b}_1 = \begin{bmatrix} -1 & 2 \\ 5 & 4 \\ 2 & -3 \end{bmatrix}\begin{bmatrix} 3 \\ -2 \end{bmatrix} = \begin{bmatrix} -7 \\ 7 \\ 12 \end{bmatrix}$, $A\mathbf{b}_2 = \begin{bmatrix} -1 & 2 \\ 5 & 4 \\ 2 & -3 \end{bmatrix}\begin{bmatrix} -4 \\ 1 \end{bmatrix} = \begin{bmatrix} 6 \\ -16 \\ -11 \end{bmatrix}$,
$AB = [A\mathbf{b}_1 \;\; A\mathbf{b}_2] = \begin{bmatrix} -7 & 6 \\ 7 & -16 \\ 12 & -11 \end{bmatrix}$
b. $\begin{bmatrix} -1 & 2 \\ 5 & 4 \\ 2 & -3 \end{bmatrix}\begin{bmatrix} 3 & -4 \\ -2 & 1 \end{bmatrix} = \begin{bmatrix} -1\cdot 3 + 2(-2) & -1(-4) + 2\cdot 1 \\ 5\cdot 3 + 4(-2) & 5(-4) + 4\cdot 1 \\ 2\cdot 3 - 3(-2) & 2(-4) - 3\cdot 1 \end{bmatrix} = \begin{bmatrix} -7 & 6 \\ 7 & -16 \\ 12 & -11 \end{bmatrix}$
6. a. $A\mathbf{b}_1 = \begin{bmatrix} 4 & -2 \\ -3 & 0 \\ 3 & 5 \end{bmatrix}\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} -4 \\ -3 \\ 23 \end{bmatrix}$, $A\mathbf{b}_2 = \begin{bmatrix} 4 & -2 \\ -3 & 0 \\ 3 & 5 \end{bmatrix}\begin{bmatrix} 3 \\ -1 \end{bmatrix} = \begin{bmatrix} 14 \\ -9 \\ 4 \end{bmatrix}$,
$AB = [A\mathbf{b}_1 \;\; A\mathbf{b}_2] = \begin{bmatrix} -4 & 14 \\ -3 & -9 \\ 23 & 4 \end{bmatrix}$
b. $\begin{bmatrix} 4 & -2 \\ -3 & 0 \\ 3 & 5 \end{bmatrix}\begin{bmatrix} 1 & 3 \\ 4 & -1 \end{bmatrix} = \begin{bmatrix} 4\cdot 1 - 2\cdot 4 & 4\cdot 3 - 2(-1) \\ -3\cdot 1 + 0\cdot 4 & -3\cdot 3 + 0(-1) \\ 3\cdot 1 + 5\cdot 4 & 3\cdot 3 + 5(-1) \end{bmatrix} = \begin{bmatrix} -4 & 14 \\ -3 & -9 \\ 23 & 4 \end{bmatrix}$
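As a quick numerical check of Exercise 5, the column definition of AB and the row-column rule can be compared in MATLAB; this is only a sketch, and the variable names are ours.
    A = [-1 2; 5 4; 2 -3];           % A from Exercise 5
    B = [3 -4; -2 1];
    AB_cols = [A*B(:,1), A*B(:,2)]   % AB built one column at a time: [Ab1 Ab2]
    AB_rule = A*B                    % row-column rule; agrees with AB_cols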
7. Since A has 3 columns, B must match with 3 rows. Otherwise, AB is undefined. Since AB has 7 columns, so does B. Thus, B is 3×7.
8. The number of rows of B matches the number of rows of BC, so B has 3 rows.
9. $AB = \begin{bmatrix} 2 & 5 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} 4 & -5 \\ 3 & k \end{bmatrix} = \begin{bmatrix} 23 & -10+5k \\ -9 & 15+k \end{bmatrix}$, while $BA = \begin{bmatrix} 4 & -5 \\ 3 & k \end{bmatrix}\begin{bmatrix} 2 & 5 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} 23 & 15 \\ 6-3k & 15+k \end{bmatrix}$.
Then AB = BA if and only if −10 + 5k = 15 and −9 = 6 − 3k, which happens if and only if k = 5.
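A short MATLAB check of the value k = 5 (a sketch; variable names are ours):
    k = 5;
    A = [2 5; -3 1];  B = [4 -5; 3 k];
    A*B - B*A        % the zero matrix, confirming AB = BA when k = 5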
10. $AB = \begin{bmatrix} 2 & -3 \\ -4 & 6 \end{bmatrix}\begin{bmatrix} 8 & 4 \\ 5 & 5 \end{bmatrix} = \begin{bmatrix} 1 & -7 \\ -2 & 14 \end{bmatrix}$, $AC = \begin{bmatrix} 2 & -3 \\ -4 & 6 \end{bmatrix}\begin{bmatrix} 5 & -2 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -7 \\ -2 & 14 \end{bmatrix}$
11. $AD = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 5 \end{bmatrix}\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix} = \begin{bmatrix} 2 & 3 & 5 \\ 2 & 6 & 15 \\ 2 & 12 & 25 \end{bmatrix}$, $DA = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 5 \end{bmatrix} = \begin{bmatrix} 2 & 2 & 2 \\ 3 & 6 & 9 \\ 5 & 20 & 25 \end{bmatrix}$
Right-multiplication (that is, multiplication on the right) by the diagonal matrix D multiplies each
column of A by the corresponding diagonal entry of D. Left-multiplication by D multiplies each row
of A by the corresponding diagonal entry of D. To make AB = BA, one can take B to be a multiple of
I3. For instance, if B = 4I3, then AB and BA are both the same as 4A.
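The scaling behavior described above is easy to see numerically; a minimal MATLAB sketch using the matrices of Exercise 11 (variable names ours):
    A = [1 1 1; 1 2 3; 1 4 5];
    D = diag([2 3 5]);
    A*D                          % each column of A scaled by the matching diagonal entry of D
    D*A                          % each row of A scaled by the matching diagonal entry of D
    B = 4*eye(3);  A*B - B*A     % the zero matrix: a multiple of I3 commutes with A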
12. Consider B = [b1 b2]. To make AB = 0, one needs Ab1 = 0 and Ab2 = 0. By inspection of A, a suitable b1 is $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$, or any multiple of $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$. Example: $B = \begin{bmatrix} 2 & 6 \\ 1 & 3 \end{bmatrix}$.
13. Use the definition of AB written in reverse order: [Ab1 ⋯ Abp] = A[b1 ⋯ bp]. Thus [Qr1 ⋯ Qrp] = QR, when R = [r1 ⋯ rp].
14. By definition, UQ = U[q1 ⋯ q4] = [Uq1 ⋯ Uq4]. From Example 6 of Section 1.8, the vector Uq1 lists the total costs (material, labor, and overhead) corresponding to the amounts of products B and C specified in the vector q1. That is, the first column of UQ lists the total costs for materials, labor, and overhead used to manufacture products B and C during the first quarter of the year. Columns 2, 3, and 4 of UQ list the total amounts spent to manufacture B and C during the 2nd, 3rd, and 4th quarters, respectively.
15. False. See the definition of AB.
16. False. AB must be a 3×3 matrix, but the formula for AB implies that it is 3×1. The plus signs should be just spaces (between columns). This is a common mistake.
17. False. The roles of A and B should be reversed in the second half of the statement. See the box after
Example 3.
18. True. See the box after Example 6.
19. True. See Theorem 2(b), read right to left.
20. True. See Theorem 3(b), read right to left.
21. False. The left-to-right order of B and C cannot be changed, in general.
22. False. See Theorem 3(d).
23. False. The phrase "in the same order" should be "in the reverse order." See the box after Theorem 3.
24. True. This general statement follows from Theorem 3(b).
25. Since $\begin{bmatrix} -1 & 2 & -1 \\ 6 & -9 & 3 \end{bmatrix} = AB = [A\mathbf{b}_1 \;\; A\mathbf{b}_2 \;\; A\mathbf{b}_3]$, the first column of B satisfies the equation $A\mathbf{x} = \begin{bmatrix} -1 \\ 6 \end{bmatrix}$.
Row reduction: $[A \;\; A\mathbf{b}_1] \sim \begin{bmatrix} 1 & -2 & -1 \\ -2 & 5 & 6 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 7 \\ 0 & 1 & 4 \end{bmatrix}$. So $\mathbf{b}_1 = \begin{bmatrix} 7 \\ 4 \end{bmatrix}$. Similarly,
$[A \;\; A\mathbf{b}_2] \sim \begin{bmatrix} 1 & -2 & 2 \\ -2 & 5 & -9 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -8 \\ 0 & 1 & -5 \end{bmatrix}$ and $\mathbf{b}_2 = \begin{bmatrix} -8 \\ -5 \end{bmatrix}$.
Note: An alternative solution of Exercise 25 is to row reduce [A Ab1 Ab2] with one sequence of row
operations. This observation can prepare the way for the inversion algorithm in Section 2.2.
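A sketch of that alternative in MATLAB, using the matrices of Exercise 25 (variable names ours):
    A  = [1 -2; -2 5];
    AB = [-1 2 -1; 6 -9 3];
    rref([A AB])     % one row reduction; the last three columns are b1, b2, b3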
26. The first two columns of AB are Ab1 and Ab2. They are equal since b1 and b2 are equal.
27. (A solution is in the text). Write B = [b1 b2 b3]. By definition, the third column of AB is Ab3. By
hypothesis, b3 = b1 + b2. So Ab3 = A(b1 + b2) = Ab1 + Ab2, by a property of matrix-vector
multiplication. Thus, the third column of AB is the sum of the first two columns of AB.
28. The second column of AB is also all zeros because Ab2 = A0 = 0.
29. Let bp be the last column of B. By hypothesis, the last column of AB is zero. Thus, Abp = 0.
However,
bp is not the zero vector, because B has no column of zeros. Thus, the equation Abp = 0 is a linear
dependence relation among the columns of A, and so the columns of A are linearly dependent.
Note: The text answer for Exercise 29 is, "The columns of A are linearly dependent. Why?" The Study Guide supplies the argument above in case a student needs help.
30. If the columns of B are linearly dependent, then there exists a nonzero vector x such that Bx = 0.
From this, A(Bx) = A0 and (AB)x = 0 (by associativity). Since x is nonzero, the columns of AB must
be linearly dependent.
31. If x satisfies Ax = 0, then CAx = C0 = 0 and so Inx = 0 and x = 0. This shows that the equation Ax = 0
has no free variables. So every variable is a basic variable and every column of A is a pivot column.
(A variation of this argument could be made using linear independence and Exercise 36 in Section
1.7.) Since each pivot is in a different row, A must have at least as many rows as columns.
32. Take any b in $\mathbb{R}^m$. By hypothesis, ADb = I_mb = b. Rewrite this equation as A(Db) = b. Thus, the vector x = Db satisfies Ax = b. This proves that the equation Ax = b has a solution for each b in $\mathbb{R}^m$. By Theorem 4 in Section 1.4, A has a pivot position in each row. Since each pivot is in a different column, A must have at least as many columns as rows.
33. By Exercise 31, the equation CA = In implies that (number of rows in A) ≥ (number of columns), that is, m ≥ n. By Exercise 32, the equation AD = Im implies that (number of rows in A) ≤ (number of columns), that is, m ≤ n. Thus m = n.
2.4 – Partitioned Matrices
Note: Exercise 25 is good for mathematics and computer science students. The solution of Exercise 25 in the Study Guide shows students how to use the principle of induction. The Study Guide also has an appendix on "The Principle of Induction," at the end of Section 2.4. The text presents more applications of induction in Section 3.2 and in the Supplementary Exercises for Chapter 3.
26. Let $A_n = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 1 & & 0 \\ \vdots & & & \ddots & \\ 1 & 1 & 1 & \cdots & 1 \end{bmatrix}$, $B_n = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & & 0 \\ \vdots & & \ddots & \ddots & \\ 0 & \cdots & 0 & -1 & 1 \end{bmatrix}$.
By direct computation $A_2B_2 = I_2$. Assume that for n = k, the matrix $A_kB_k$ is $I_k$, and write
$A_{k+1} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{v} & A_k \end{bmatrix}$ and $B_{k+1} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{w} & B_k \end{bmatrix}$
where v and w are in $\mathbb{R}^k$, $\mathbf{v}^T = [1 \;\; 1 \;\; \cdots \;\; 1]$, and $\mathbf{w}^T = [-1 \;\; 0 \;\; \cdots \;\; 0]$. Then
$A_{k+1}B_{k+1} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{v} & A_k \end{bmatrix}\begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{w} & B_k \end{bmatrix} = \begin{bmatrix} 1 + \mathbf{0}^T\mathbf{w} & \mathbf{0}^T + \mathbf{0}^TB_k \\ \mathbf{v} + A_k\mathbf{w} & \mathbf{v}\mathbf{0}^T + A_kB_k \end{bmatrix} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{0} & I_k \end{bmatrix} = I_{k+1}$
The (2,1)-entry is 0 because v equals the first column of $A_k$, and $A_k\mathbf{w}$ is −1 times the first column of $A_k$. By the principle of induction, $A_nB_n = I_n$ for all n ≥ 2. Since $A_n$ and $B_n$ are square, the IMT shows that these matrices are invertible, and $B_n = A_n^{-1}$.
Note: An induction proof can also be given using partitions with the form shown below. The details are slightly more complicated.
$A_{k+1} = \begin{bmatrix} A_k & \mathbf{0} \\ \mathbf{v}^T & 1 \end{bmatrix}$ and $B_{k+1} = \begin{bmatrix} B_k & \mathbf{0} \\ \mathbf{w}^T & 1 \end{bmatrix}$
$A_{k+1}B_{k+1} = \begin{bmatrix} A_k & \mathbf{0} \\ \mathbf{v}^T & 1 \end{bmatrix}\begin{bmatrix} B_k & \mathbf{0} \\ \mathbf{w}^T & 1 \end{bmatrix} = \begin{bmatrix} A_kB_k + \mathbf{0}\mathbf{w}^T & A_k\mathbf{0} + \mathbf{0} \\ \mathbf{v}^TB_k + \mathbf{w}^T & \mathbf{v}^T\mathbf{0} + 1 \end{bmatrix} = \begin{bmatrix} I_k & \mathbf{0} \\ \mathbf{0}^T & 1 \end{bmatrix} = I_{k+1}$
The (2,1)-entry is $\mathbf{0}^T$ because $\mathbf{v}^T$ times a column of $B_k$ equals the sum of the entries in the column, and all of such sums are zero except the last, which is 1. So $\mathbf{v}^TB_k$ is the negative of $\mathbf{w}^T$. By the principle of induction, $A_nB_n = I_n$ for all n ≥ 2. Since $A_n$ and $B_n$ are square, the IMT shows that these matrices are invertible, and $B_n = A_n^{-1}$.
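Before (or after) the induction, the claim that $A_nB_n = I_n$ is easy to spot-check numerically; a minimal MATLAB sketch (the choice n = 5 is arbitrary):
    n = 5;
    A = tril(ones(n));                    % lower triangular matrix of ones
    B = eye(n) - diag(ones(n-1,1), -1);   % 1's on the diagonal, -1's just below it
    A*B                                   % the identity matrix I_n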
27. First, visualize a partition of A as a 2×2 block-diagonal matrix, as below, and then visualize the (2,2)-block itself as a block-diagonal matrix. That is,
$A = \begin{bmatrix} 1 & 2 & 0 & 0 & 0 \\ 3 & 5 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 7 & 8 \\ 0 & 0 & 0 & 5 & 6 \end{bmatrix} = \begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}$, where $A_{22} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 7 & 8 \\ 0 & 5 & 6 \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 0 & B \end{bmatrix}$
Observe that B is invertible and $B^{-1} = \begin{bmatrix} 3 & -4 \\ -2.5 & 3.5 \end{bmatrix}$. By Exercise 15, the block diagonal matrix $A_{22}$ is invertible, and
$A_{22}^{-1} = \begin{bmatrix} .5 & 0 & 0 \\ 0 & 3 & -4 \\ 0 & -2.5 & 3.5 \end{bmatrix}$
Next, observe that $A_{11}$ is also invertible, with inverse $\begin{bmatrix} -5 & 2 \\ 3 & -1 \end{bmatrix}$. By Exercise 15, A itself is invertible, and its inverse is block diagonal:
$A^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 \\ 0 & A_{22}^{-1} \end{bmatrix} = \begin{bmatrix} -5 & 2 & 0 & 0 & 0 \\ 3 & -1 & 0 & 0 & 0 \\ 0 & 0 & .5 & 0 & 0 \\ 0 & 0 & 0 & 3 & -4 \\ 0 & 0 & 0 & -2.5 & 3.5 \end{bmatrix}$
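A numerical spot-check of the block computation in MATLAB (a sketch; the entries are those of Exercise 27):
    A = [1 2 0 0 0;
         3 5 0 0 0;
         0 0 2 0 0;
         0 0 0 7 8;
         0 0 0 5 6];
    inv(A)    % block diagonal, matching [inv(A11) 0; 0 inv(A22)] above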
28. This exercise and the next, which involve large matrices, are more appropriate for MATLAB, Maple, and Mathematica than for graphing calculators.
a. Display the submatrix of A obtained from rows 15 to 20 and columns 5 to 10.
MATLAB: A(15:20, 5:10)
Maple: submatrix(A, 15..20, 5..10)
Mathematica: Take[ A, {15,20}, {5,10} ]
b. Insert a 5×10 matrix B into rows 10 to 14 and columns 20 to 29 of matrix A:
MATLAB: A(10:14, 20:29) = B ; (The semicolon suppresses output display.)
Maple: copyinto(B, A, 10, 20): (The colon suppresses output display.)
Mathematica: For[ i=10, i<=14, i++, For[ j=20, j<=29, j++, … ] ]
2.9 – Dimension and Rank
35. a. … q > p. For j = 1, …, q, the vector aj is in W. Since the columns of B span W, the vector aj is in the column space of B. That is, aj = Bcj for some vector cj of weights. Note that cj is in $\mathbb{R}^p$ because B has p columns.
b. Let C = [c1 ⋯ cq]. Then C is a p×q matrix because each of the q columns is in $\mathbb{R}^p$. By hypothesis, q is larger than p, so C has more columns than rows. By a theorem, the columns of C are linearly dependent and there exists a nonzero vector u in $\mathbb{R}^q$ such that Cu = 0.
c. From part (a) and the definition of matrix multiplication, A = [a1 ⋯ aq] = [Bc1 ⋯ Bcq] = BC. From part (b), Au = (BC)u = B(Cu) = B0 = 0. Since u is nonzero, the columns of A are linearly dependent.
36. If $\mathcal{A}$ contained more vectors than $\mathcal{B}$, then $\mathcal{A}$ would be linearly dependent, by Exercise 35, because $\mathcal{B}$ spans W. Repeat the argument with $\mathcal{B}$ and $\mathcal{A}$ interchanged to conclude that $\mathcal{B}$ cannot contain more vectors than $\mathcal{A}$.
37. Apply the matrix command rref to the matrix [v1 v2 x]:
$\begin{bmatrix} 11 & 14 & 19 \\ -5 & -8 & -13 \\ 10 & 13 & 18 \\ 7 & 10 & 15 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1.667 \\ 0 & 1 & 2.667 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
The equation c1v1 + c2v2 = x is consistent, so x is in the subspace H. The decimal approximations suggest c1 = −5/3 and c2 = 8/3, and it can be checked that these values are precise. Thus, the B-coordinate of x is (−5/3, 8/3).
38. Apply the matrix command rref to the matrix [v1 v2 v3 x]:
$\begin{bmatrix} -6 & 8 & -9 & 4 \\ 4 & -3 & 5 & 7 \\ -9 & 7 & -8 & -8 \\ 4 & -3 & 3 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 3 \\ 0 & 1 & 0 & 5 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
The first three columns of [v1 v2 v3 x] are pivot columns, so v1, v2, and v3 are linearly independent. Thus v1, v2, and v3 form a basis B for the subspace H which they span. View [v1 v2 v3 x] as an augmented matrix for c1v1 + c2v2 + c3v3 = x. The reduced echelon form shows that x is in H and
$[\mathbf{x}]_B = \begin{bmatrix} 3 \\ 5 \\ 2 \end{bmatrix}$.
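In MATLAB the computation for Exercise 38 is a single rref call; a sketch (variable names ours):
    v1 = [-6; 4; -9; 4];  v2 = [8; -3; 7; -3];  v3 = [-9; 5; -8; 3];
    x  = [4; 7; -8; 3];
    rref([v1 v2 v3 x])    % the last column gives [x]_B = (3, 5, 2)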
Notes: The Study Guide for Section 2.9 contains a complete list of the statements in the Invertible Matrix Theorem that have been given so far. The format is the same as that used in Section 2.3, with three columns: statements that are logically equivalent for any m×n matrix and are related to existence concepts, those that are equivalent only for any n×n matrix, and those that are equivalent for any n×p matrix and are related to uniqueness concepts. Four statements are included that are not in the text's official list of statements, to give more symmetry to the three columns.
The Study Guide section also contains directions for making a review sheet for "dimension" and "rank."
Chapter 2 – Supplementary Exercises
1. True. If A and B are m×n matrices, then $B^T$ has as many rows as A has columns, so $AB^T$ is defined. Also, $A^TB$ is defined because $A^T$ has m columns and B has m rows.
2. False. B must have 2 columns. A has as many columns as B has rows.
3. True. The ith row of A has the form (0, …, di, …, 0). So the ith row of AB is (0, …, di, …, 0)B, which is di times the ith row of B.
4. False. Take the zero matrix for B. Or, construct a matrix B such that the equation Bx = 0 has
nontrivial solutions, and construct C and D so that C ≠ D and the columns of C − D satisfy the equation Bx = 0. Then B(C − D) = 0 and BC = BD.
5. False. Counterexample: $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $C = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$.
6. False. $(A + B)(A - B) = A^2 - AB + BA - B^2$. This equals $A^2 - B^2$ if and only if A commutes with B.
7. True. An n×n replacement matrix has n + 1 nonzero entries. The n×n scale and interchange matrices have n nonzero entries.
8. True. The transpose of an elementary matrix is an elementary matrix of the same type.
9. True. An n×n elementary matrix is obtained by a row operation on In.
10. False. Elementary matrices are invertible, so a product of such matrices is invertible. But not every
square matrix is invertible.
11. True. If A is 3×3 with three pivot positions, then A is row equivalent to I3.
12. False. A must be square in order to conclude from the equation AB = I that A is invertible.
13. False. AB is invertible, but $(AB)^{-1} = B^{-1}A^{-1}$, and this product is not always equal to $A^{-1}B^{-1}$.
14. True. Given AB = BA, left-multiply by $A^{-1}$ to get $B = A^{-1}BA$, and then right-multiply by $A^{-1}$ to obtain $BA^{-1} = A^{-1}B$.
15. False. The correct equation is $(rA)^{-1} = r^{-1}A^{-1}$, because $(rA)(r^{-1}A^{-1}) = (rr^{-1})(AA^{-1}) = 1\cdot I = I$.
16. $C = (C^{-1})^{-1} = \dfrac{1}{-2}\begin{bmatrix} 7 & -5 \\ -6 & 4 \end{bmatrix} = \begin{bmatrix} -7/2 & 5/2 \\ 3 & -2 \end{bmatrix}$
17. $A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$, $A^2 = A\cdot A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}$,
$A^3 = A^2\cdot A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
Next, $(I - A)(I + A + A^2) = I + A + A^2 - A(I + A + A^2) = I + A + A^2 - A - A^2 - A^3 = I - A^3$. Since $A^3 = 0$, $(I - A)(I + A + A^2) = I$.
18. From Exercise 17, the inverse of I − A is probably $I + A + A^2 + \cdots + A^{n-1}$. To verify this, compute
$(I - A)(I + A + \cdots + A^{n-1}) = I + A + \cdots + A^{n-1} - A(I + A + \cdots + A^{n-1}) = I - AA^{n-1} = I - A^n$
If $A^n = 0$, then the matrix $B = I + A + A^2 + \cdots + A^{n-1}$ satisfies (I − A)B = I. Since I − A and B are square, they are invertible by the Invertible Matrix Theorem, and B is the inverse of I − A.
19. $A^2 = 2A - I$. Multiply by A: $A^3 = 2A^2 - A$. Substitute $A^2 = 2A - I$: $A^3 = 2(2A - I) - A = 3A - 2I$. Multiply by A again: $A^4 = A(3A - 2I) = 3A^2 - 2A$. Substitute the identity $A^2 = 2A - I$ again. Finally, $A^4 = 3(2A - I) - 2A = 4A - 3I$.
20. Let $A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. By direct computation, $A^2 = I$, $B^2 = I$, and $AB = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = -BA$.
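A quick MATLAB verification of this counterexample (a sketch):
    A = [1 0; 0 -1];  B = [0 1; 1 0];
    A^2, B^2          % both equal I
    A*B + B*A         % the zero matrix, so AB = -BA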
21. (Partial answer in Study Guide) Since $A^{-1}B$ is the solution of AX = B, row reduction of [A B] to [I X] will produce $X = A^{-1}B$. See Exercise 22 in Section 2.2.
$[A \;\; B] = \begin{bmatrix} 1 & 3 & 8 & -3 & 5 \\ 2 & 4 & 11 & 1 & 5 \\ 1 & 2 & 5 & 3 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 8 & -3 & 5 \\ 0 & -2 & -5 & 7 & -5 \\ 0 & -1 & -3 & 6 & -1 \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 8 & -3 & 5 \\ 0 & 1 & 3 & -6 & 1 \\ 0 & -2 & -5 & 7 & -5 \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 8 & -3 & 5 \\ 0 & 1 & 3 & -6 & 1 \\ 0 & 0 & 1 & -5 & -3 \end{bmatrix}$
$\sim \begin{bmatrix} 1 & 3 & 0 & 37 & 29 \\ 0 & 1 & 0 & 9 & 10 \\ 0 & 0 & 1 & -5 & -3 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 10 & -1 \\ 0 & 1 & 0 & 9 & 10 \\ 0 & 0 & 1 & -5 & -3 \end{bmatrix}$. Thus, $A^{-1}B = \begin{bmatrix} 10 & -1 \\ 9 & 10 \\ -5 & -3 \end{bmatrix}$.
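A hedged MATLAB check of Exercise 21 (variable names ours); the backslash operator solves AX = B directly:
    A = [1 3 8; 2 4 11; 1 2 5];
    B = [-3 5; 1 5; 3 4];
    A\B              % equals inv(A)*B = [10 -1; 9 10; -5 -3]
    rref([A B])      % the last two columns reproduce the same result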
22. By definition of matrix multiplication, the matrix A satisfies $A\begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix} = \begin{bmatrix} 1 & 3 \\ 1 & 1 \end{bmatrix}$.
Right-multiply both sides by the inverse of $\begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix}$. The left side becomes A. Thus,
$A = \begin{bmatrix} 1 & 3 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 7 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 4 & -1 \end{bmatrix}$.
23. Given $AB = \begin{bmatrix} 5 & 4 \\ -2 & 3 \end{bmatrix}$ and $B = \begin{bmatrix} 7 & 3 \\ 2 & 1 \end{bmatrix}$, notice that $ABB^{-1} = A$. Since det B = 7 − 6 = 1,
$B^{-1} = \begin{bmatrix} 1 & -3 \\ -2 & 7 \end{bmatrix}$ and $A = (AB)B^{-1} = \begin{bmatrix} 5 & 4 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} 1 & -3 \\ -2 & 7 \end{bmatrix} = \begin{bmatrix} -3 & 13 \\ -8 & 27 \end{bmatrix}$
Note: Variants of this question make simple exam questions.
24. Since A is invertible, so is $A^T$, by the Invertible Matrix Theorem. Then $A^TA$ is the product of invertible matrices and so is invertible. Thus, the formula $(A^TA)^{-1}A^T$ makes sense. By Theorem 6 in Section 2.2,
$(A^TA)^{-1}A^T = A^{-1}(A^T)^{-1}A^T = A^{-1}I = A^{-1}$
An alternative calculation: $(A^TA)^{-1}A^T\cdot A = (A^TA)^{-1}(A^TA) = I$. Since A is invertible, this equation shows that its inverse is $(A^TA)^{-1}A^T$.
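A quick numerical illustration of Exercise 24, using an invertible matrix chosen only for the example:
    A = [2 1; 5 3];              % any invertible matrix will do
    inv(A'*A)*A' - inv(A)        % zero (up to roundoff), so (A'A)^(-1)A' = A^(-1)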
25. a. For i = 1, …, n, $p(x_i) = c_0 + c_1x_i + \cdots + c_{n-1}x_i^{n-1} = \mathrm{row}_i(V)\cdot\begin{bmatrix} c_0 \\ \vdots \\ c_{n-1} \end{bmatrix} = \mathrm{row}_i(V)\mathbf{c}$.
By a property of matrix multiplication, shown after Example 6 in Section 2.1, and the fact that c was chosen to satisfy Vc = y, $\mathrm{row}_i(V)\mathbf{c} = \mathrm{row}_i(V\mathbf{c}) = \mathrm{row}_i(\mathbf{y}) = y_i$. Thus, $p(x_i) = y_i$. To summarize, the entries in Vc are the values of the polynomial p(x) at x1, …, xn.
b. Suppose x1, …, xn are distinct, and suppose Vc = 0 for some vector c. Then the entries in c are the coefficients of a polynomial whose value is zero at the distinct points x1, …, xn. However, a nonzero polynomial of degree n − 1 cannot have n zeros, so the polynomial must be identically zero. That is, the entries in c must all be zero. This shows that the columns of V are linearly independent.
c. (Solution in Study Guide) When x1, …, xn are distinct, the columns of V are linearly independent, by (b). By the Invertible Matrix Theorem, V is invertible and its columns span $\mathbb{R}^n$. So, for every y = (y1, …, yn) in $\mathbb{R}^n$, there is a vector c such that Vc = y. Let p be the polynomial whose coefficients are listed in c. Then, by (a), p is an interpolating polynomial for (x1, y1), …, (xn, yn).
26. If A = LU, then col1(A) = L·col1(U). Since col1(U) has a zero in every entry except possibly the first, L·col1(U) is a linear combination of the columns of L in which all weights except possibly the first are zero. So col1(A) is a multiple of col1(L).
Similarly, col2(A) = L·col2(U), which is a linear combination of the columns of L using the first two entries in col2(U) as weights, because the other entries in col2(U) are zero. Thus col2(A) is a linear combination of the first two columns of L.
27. a. $P^2 = (\mathbf{u}\mathbf{u}^T)(\mathbf{u}\mathbf{u}^T) = \mathbf{u}(\mathbf{u}^T\mathbf{u})\mathbf{u}^T = \mathbf{u}(1)\mathbf{u}^T = P$, because u satisfies $\mathbf{u}^T\mathbf{u} = 1$.
b. $P^T = (\mathbf{u}\mathbf{u}^T)^T = (\mathbf{u}^T)^T\mathbf{u}^T = \mathbf{u}\mathbf{u}^T = P$
c. $Q^2 = (I - 2P)(I - 2P) = I - I(2P) - 2PI + 2P(2P) = I - 4P + 4P^2 = I$, because of part (a).
28. Given $\mathbf{u} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$, define P and Q as in Exercise 27 by
$P = \mathbf{u}\mathbf{u}^T = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\begin{bmatrix} 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, $Q = I - 2P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} - 2\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$
If $\mathbf{x} = \begin{bmatrix} 1 \\ 5 \\ 3 \end{bmatrix}$, then $P\mathbf{x} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 5 \\ 3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 3 \end{bmatrix}$ and $Q\mathbf{x} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}\begin{bmatrix} 1 \\ 5 \\ 3 \end{bmatrix} = \begin{bmatrix} 1 \\ 5 \\ -3 \end{bmatrix}$.
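A short MATLAB check of Exercise 28 (a sketch; variable names ours). Q reflects vectors through the xy-plane:
    u = [0; 0; 1];
    P = u*u';
    Q = eye(3) - 2*P;
    x = [1; 5; 3];
    [P*x, Q*x]       % columns (0, 0, 3) and (1, 5, -3), as computed above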
29. Left-multiplication by an elementary matrix produces an elementary row operation:
$B \sim E_1B \sim E_2E_1B \sim E_3E_2E_1B = C$, so B is row equivalent to C. Since row operations are reversible, C
is row equivalent to B. (Alternatively, show C being changed into B by row operations using the
inverse of the Ei .)
30. Since A is not invertible, there is a nonzero vector v in $\mathbb{R}^n$ such that Av = 0. Place n copies of v into an n×n matrix B. Then AB = A[v ⋯ v] = [Av ⋯ Av] = 0.
31. Let A be a 6×4 matrix and B a 4×6 matrix. Since B has more columns than rows, its six columns are linearly dependent and there is a nonzero x such that Bx = 0. Thus ABx = A0 = 0. This shows that the matrix AB is not invertible, by the IMT. (Basically the same argument was used to solve Exercise 30 in Section 2.1.)
Note: (In the Study Guide) It is possible that BA is invertible. For example, let C be an invertible 4×4 matrix and construct $A = \begin{bmatrix} C \\ 0 \end{bmatrix}$ and $B = \begin{bmatrix} C^{-1} & 0 \end{bmatrix}$. Then $BA = I_4$, which is invertible.
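A concrete instance of that note, sketched in MATLAB; pascal(4) is used here only as a convenient invertible 4×4 matrix:
    C = pascal(4);               % an invertible 4x4 matrix
    A = [C; zeros(2,4)];         % 6x4
    B = [inv(C), zeros(4,2)];    % 4x6
    B*A                          % I_4, so BA is invertible even though AB is not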
32. By hypothesis, A is 5×3, C is 3×5, and AC = I3. Suppose x satisfies Ax = b. Then CAx = Cb. Since
CA = I, x must be Cb. This shows that Cb is the only solution of Ax = b.
33. Let $A = \begin{bmatrix} .4 & .2 & .3 \\ .3 & .6 & .3 \\ .3 & .2 & .4 \end{bmatrix}$. Then $A^2 = \begin{bmatrix} .31 & .26 & .30 \\ .39 & .48 & .39 \\ .30 & .26 & .31 \end{bmatrix}$. Instead of computing $A^3$ next, speed up the calculations by computing
$A^4 = A^2A^2 = \begin{bmatrix} .2875 & .2834 & .2874 \\ .4251 & .4332 & .4251 \\ .2874 & .2834 & .2875 \end{bmatrix}$, $A^8 = A^4A^4 = \begin{bmatrix} .2857 & .2857 & .2857 \\ .4285 & .4286 & .4285 \\ .2857 & .2857 & .2857 \end{bmatrix}$
To four decimal places, as k increases, $A^k \to \begin{bmatrix} .2857 & .2857 & .2857 \\ .4286 & .4286 & .4286 \\ .2857 & .2857 & .2857 \end{bmatrix}$, or, in rational format, $A^k \to \begin{bmatrix} 2/7 & 2/7 & 2/7 \\ 3/7 & 3/7 & 3/7 \\ 2/7 & 2/7 & 2/7 \end{bmatrix}$.
If $B = \begin{bmatrix} 0 & .2 & .3 \\ .1 & .6 & .3 \\ .9 & .2 & .4 \end{bmatrix}$, then $B^2 = \begin{bmatrix} .29 & .18 & .18 \\ .33 & .44 & .33 \\ .38 & .38 & .49 \end{bmatrix}$, $B^4 = \begin{bmatrix} .2119 & .1998 & .1998 \\ .3663 & .3764 & .3663 \\ .4218 & .4218 & .4339 \end{bmatrix}$,
$B^8 = \begin{bmatrix} .2024 & .2022 & .2022 \\ .3707 & .3709 & .3707 \\ .4269 & .4269 & .4271 \end{bmatrix}$. To four decimal places, as k increases,
$B^k \to \begin{bmatrix} .2022 & .2022 & .2022 \\ .3708 & .3708 & .3708 \\ .4270 & .4270 & .4270 \end{bmatrix}$, or, in rational format, $B^k \to \begin{bmatrix} 18/89 & 18/89 & 18/89 \\ 33/89 & 33/89 & 33/89 \\ 38/89 & 38/89 & 38/89 \end{bmatrix}$.
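The limiting behavior in Exercise 33 can be watched directly; a minimal MATLAB sketch (variable names ours):
    A = [.4 .2 .3; .3 .6 .3; .3 .2 .4];
    A2 = A*A;  A4 = A2*A2;  A8 = A4*A4;    % repeated squaring, as in the solution
    A8 - [2 2 2; 3 3 3; 2 2 2]/7           % already small: A^k is close to its limit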
34. The 4×4 matrix A4 is the 4×4 matrix of ones, minus the 4×4 identity matrix. The MATLAB command is A4 = ones(4) - eye(4). For the inverse, use inv(A4).
$A_4 = \begin{bmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{bmatrix}$, $A_4^{-1} = \begin{bmatrix} -2/3 & 1/3 & 1/3 & 1/3 \\ 1/3 & -2/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & -2/3 & 1/3 \\ 1/3 & 1/3 & 1/3 & -2/3 \end{bmatrix}$
$A_5 = \begin{bmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 \end{bmatrix}$, $A_5^{-1} = \begin{bmatrix} -3/4 & 1/4 & 1/4 & 1/4 & 1/4 \\ 1/4 & -3/4 & 1/4 & 1/4 & 1/4 \\ 1/4 & 1/4 & -3/4 & 1/4 & 1/4 \\ 1/4 & 1/4 & 1/4 & -3/4 & 1/4 \\ 1/4 & 1/4 & 1/4 & 1/4 & -3/4 \end{bmatrix}$
$A_6 = \begin{bmatrix} 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 \end{bmatrix}$, $A_6^{-1} = \begin{bmatrix} -4/5 & 1/5 & 1/5 & 1/5 & 1/5 & 1/5 \\ 1/5 & -4/5 & 1/5 & 1/5 & 1/5 & 1/5 \\ 1/5 & 1/5 & -4/5 & 1/5 & 1/5 & 1/5 \\ 1/5 & 1/5 & 1/5 & -4/5 & 1/5 & 1/5 \\ 1/5 & 1/5 & 1/5 & 1/5 & -4/5 & 1/5 \\ 1/5 & 1/5 & 1/5 & 1/5 & 1/5 & -4/5 \end{bmatrix}$
The construction of A6 and the appearance of its inverse suggest that the inverse is related to $I_6$. In fact, $A_6^{-1} + I_6$ is 1/5 times the 6×6 matrix of ones. Let J denote the n×n matrix of ones. The conjecture is:
$A_n = J - I_n$ and $A_n^{-1} = \dfrac{1}{n-1}J - I_n$
Proof: (Not required) Observe that $J^2 = nJ$ and $A_nJ = (J - I)J = J^2 - J = (n - 1)J$. Now compute $A_n((n-1)^{-1}J - I) = (n-1)^{-1}A_nJ - A_n = J - (J - I) = I$. Since $A_n$ is square, $A_n$ is invertible and its inverse is $(n-1)^{-1}J - I$.
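The conjectured inverse can be spot-checked numerically; a minimal MATLAB sketch for one value of n (chosen arbitrarily):
    n = 6;
    J = ones(n);
    An = J - eye(n);
    An_inv = J/(n-1) - eye(n);   % conjectured inverse
    An*An_inv                    % the identity matrix I_n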