Solution Manual for Linear Algebra with Applications, 5th Edition
Preview Extract
Chapter 2
Section 2.1
2.1.1 Not a linear transformation, since y2 = x2 + 2 is not linear in our sense.
2.1.2 Linear, with matrix [0 2 0; 0 0 3; 1 0 0].
2.1.3 Not linear, since y2 = x1 x3 is nonlinear.
2.1.4 A = [9 3 -3; 2 -9 1; 4 -9 -2; 5 1 5].
2.1.5 By Theorem 2.1.2, the three columns of the 2 × 3 matrix A are T(~e1), T(~e2), and T(~e3), so that
A = [7 6 -13; 11 9 17].
2.1.6 Note that x1[1; 2; 3] + x2[4; 5; 6] = [1 4; 2 5; 3 6][x1; x2], so that T is indeed linear, with matrix [1 4; 2 5; 3 6].
2.1.7 Note that x1~v1 + · · · + xm~vm = [~v1 . . . ~vm][x1; . . . ; xm], so that T is indeed linear, with matrix [~v1 ~v2 · · · ~vm].
2.1.8 Reducing the system
x1 + 7x2 = y1
3x1 + 20x2 = y2
we obtain
x1 = -20y1 + 7y2
x2 = 3y1 - y2.

2.1.9 We have to attempt to solve the equation [2 3; 6 9][x1; x2] = [y1; y2] for x1 and x2. Reducing the system
2x1 + 3x2 = y1
6x1 + 9x2 = y2
we obtain
x1 + 1.5x2 = 0.5y1
0 = -3y1 + y2.
No unique solution (x1, x2) can be found for a given (y1, y2); the matrix is noninvertible.
2.1.10 We have to attempt to solve the equation [1 2; 4 9][x1; x2] = [y1; y2] for x1 and x2. Reducing the system
x1 + 2x2 = y1
4x1 + 9x2 = y2
we find that
x1 = 9y1 - 2y2
x2 = -4y1 + y2,
or [x1; x2] = [9 -2; -4 1][y1; y2]. The inverse matrix is [9 -2; -4 1].
Copyright © 2013 Pearson Education, Inc.
2.1.11 We have to attempt to solve the equation [1 2; 3 9][x1; x2] = [y1; y2] for x1 and x2. Reducing the system
x1 + 2x2 = y1
3x1 + 9x2 = y2
we find that
x1 = 3y1 - (2/3)y2
x2 = -y1 + (1/3)y2.
The inverse matrix is [3 -2/3; -1 1/3].

2.1.12 Reducing the system
x1 + kx2 = y1
x2 = y2
we find that
x1 = y1 - ky2
x2 = y2.
The inverse matrix is [1 -k; 0 1].
2.1.13 a First suppose that a ≠ 0. We have to attempt to solve the system
ax1 + bx2 = y1
cx1 + dx2 = y2
for x1 and x2. Dividing the first equation by a (÷a) gives
x1 + (b/a)x2 = (1/a)y1
cx1 + dx2 = y2
and subtracting c times the first equation from the second (-c(I)) gives
x1 + (b/a)x2 = (1/a)y1
(d - bc/a)x2 = -(c/a)y1 + y2,
that is,
x1 + (b/a)x2 = (1/a)y1
((ad - bc)/a)x2 = -(c/a)y1 + y2.
We can solve this system for x1 and x2 if (and only if) ad - bc ≠ 0, as claimed.
If a = 0, then we have to consider the system
bx2 = y1
cx1 + dx2 = y2.
Swapping the two equations (I ↔ II) gives
cx1 + dx2 = y2
bx2 = y1.
We can solve for x1 and x2 provided that both b and c are nonzero, that is, if bc ≠ 0. Since a = 0, this means that ad - bc ≠ 0, as claimed.
b First suppose that ad - bc ≠ 0 and a ≠ 0. Let D = ad - bc for simplicity. We continue our work in part (a):
x1 + (b/a)x2 = (1/a)y1
(D/a)x2 = -(c/a)y1 + y2.
Multiplying the second equation by a/D (· a/D) gives
x1 + (b/a)x2 = (1/a)y1
x2 = -(c/D)y1 + (a/D)y2,
and subtracting (b/a) times the second equation from the first (-(b/a)(II)) gives
x1 = (1/a + bc/(aD))y1 - (b/D)y2
x2 = -(c/D)y1 + (a/D)y2.
Note that 1/a + bc/(aD) = (D + bc)/(aD) = ad/(aD) = d/D, so that
x1 = (d/D)y1 - (b/D)y2
x2 = -(c/D)y1 + (a/D)y2.
It follows that [a b; c d]^(-1) = (1/(ad - bc))[d -b; -c a], as claimed.
If ad - bc ≠ 0 and a = 0, then we have to solve the system
cx1 + dx2 = y2
bx2 = y1.
Dividing the first equation by c and the second by b (÷c, ÷b) gives
x1 + (d/c)x2 = (1/c)y2
x2 = (1/b)y1,
and subtracting (d/c) times the second equation from the first (-(d/c)(II)) gives
x1 = -(d/(bc))y1 + (1/c)y2
x2 = (1/b)y1.
It follows that [a b; c d]^(-1) = (1/(ad - bc))[d -b; -c a] (recall that a = 0), as claimed.
2.1.14 a By Exercise 13a, [2 3; 5 k] is invertible if (and only if) 2k - 15 ≠ 0, or k ≠ 7.5.

b By Exercise 13b, [2 3; 5 k]^(-1) = (1/(2k - 15))[k -3; -5 2].
If all entries of this inverse are integers, then 3/(2k - 15) - 2/(2k - 15) = 1/(2k - 15) is a (nonzero) integer n, so that 2k - 15 = 1/n, or k = 7.5 + 1/(2n). Since k/(2k - 15) = kn = 7.5n + 1/2 is an integer as well, n must be odd.
We have shown: If all entries of the inverse are integers, then k = 7.5 + 1/(2n), where n is an odd integer. The converse is true as well: If k is chosen in this way, then the entries of [2 3; 5 k]^(-1) will be integers.
2.1.15 By Exercise 13a, the matrix [a -b; b a] is invertible if (and only if) a² + b² ≠ 0, which is the case unless a = b = 0. If [a -b; b a] is invertible, then its inverse is (1/(a² + b²))[a b; -b a], by Exercise 13b.
2.1.16 If A = [3 0; 0 3], then A~x = 3~x for all ~x in R², so that A represents a scaling by a factor of 3. Its inverse is a scaling by a factor of 1/3: A^(-1) = [1/3 0; 0 1/3]. (See Figure 2.1.)
2.1.17 If A = [-1 0; 0 -1], then A~x = -~x for all ~x in R², so that A represents a reflection about the origin. This transformation is its own inverse: A^(-1) = A. (See Figure 2.2.)
2.1.18 Compare with Exercise 16: This matrix represents a scaling by the factor of 1/2; the inverse is a scaling by 2. (See Figure 2.3.)
2.1.19 If A = [1 0; 0 0], then A[x1; x2] = [x1; 0], so that A represents the orthogonal projection onto the ~e1 axis. This transformation is not invertible, since the equation A~x = [1; 0] has infinitely many solutions ~x. (See Figure 2.4.)
Figure 2.1: for Problem 2.1.16.

Figure 2.2: for Problem 2.1.17.

Figure 2.3: for Problem 2.1.18.
2.1.20 If A = [0 1; 1 0], then A[x1; x2] = [x2; x1], so that A represents the reflection about the line x2 = x1. This transformation is its own inverse: A^(-1) = A. (See Figure 2.5.)
2.1.21 Compare with Example 5.
If A = [0 1; -1 0], then A[x1; x2] = [x2; -x1]. Note that the vectors ~x and A~x are perpendicular and have the same length. If ~x is in the first quadrant, then A~x is in the fourth. Therefore, A represents the rotation through an angle of 90° in the clockwise direction. (See Figure 2.6.) The inverse A^(-1) = [0 -1; 1 0] represents the rotation through 90° in the counterclockwise direction.

Figure 2.4: for Problem 2.1.19.

Figure 2.5: for Problem 2.1.20.

Figure 2.6: for Problem 2.1.21.
2.1.22 If A = [1 0; 0 -1], then A[x1; x2] = [x1; -x2], so that A represents the reflection about the ~e1 axis. This transformation is its own inverse: A^(-1) = A. (See Figure 2.7.)
2.1.23 Compare with Exercise 21.
Note that A = 2[0 1; -1 0], so that A represents a rotation through an angle of 90° in the clockwise direction, followed by a scaling by the factor of 2.
The inverse A^(-1) = [0 -1/2; 1/2 0] represents a rotation through an angle of 90° in the counterclockwise direction, followed by a scaling by the factor of 1/2. (See Figure 2.8.)

Figure 2.7: for Problem 2.1.22.

Figure 2.8: for Problem 2.1.23.
2.1.24 Compare with Example 5. (See Figure 2.9.)
Figure 2.9: for Problem 2.1.24.
2.1.25 The matrix represents a scaling by the factor of 2. (See Figure 2.10.)
2.1.26 This matrix represents a reflection about the line x2 = x1 . (See Figure 2.11.)
2.1.27 This matrix represents a reflection about the ~e1 axis. (See Figure 2.12.)
Figure 2.10: for Problem 2.1.25.
Figure 2.11: for Problem 2.1.26.
Figure 2.12: for Problem 2.1.27.
2.1.28 If A = [1 0; 0 2], then A[x1; x2] = [x1; 2x2], so that the x2 component is multiplied by 2, while the x1 component remains unchanged. (See Figure 2.13.)
Figure 2.13: for Problem 2.1.28.
2.1.29 This matrix represents a reflection about the origin. Compare with Exercise 17. (See Figure 2.14.)
Figure 2.14: for Problem 2.1.29.
2.1.30 If A = [0 0; 0 1], then A[x1; x2] = [0; x2], so that A represents the projection onto the ~e2 axis. (See Figure 2.15.)
Figure 2.15: for Problem 2.1.30.
2.1.31 The image must be reflected about the ~e2 axis; that is, [x1; x2] must be transformed into [-x1; x2]. This can be accomplished by means of the linear transformation T(~x) = [-1 0; 0 1]~x.

2.1.32 Using Theorem 2.1.2, we find A = [3 0 · · · 0; 0 3 · · · 0; . . . ; 0 0 · · · 3]. This matrix has 3's on the diagonal and 0's everywhere else.
2.1.33 By Theorem 2.1.2, A = [T([1; 0]) T([0; 1])]. (See Figure 2.16.)
Therefore, A = [-1/√2 -1/√2; 1/√2 -1/√2].
2.1.34 As in Exercise 2.1.33, we find T(~e1) and T(~e2); then by Theorem 2.1.2, A = [T(~e1) T(~e2)]. (See Figure 2.17.)
Figure 2.16: for Problem 2.1.33.
Figure 2.17: for Problem 2.1.34.
Therefore, A = [cos θ -sin θ; sin θ cos θ].
2.1.35 We want to find a matrix A = [a b; c d] such that A[5; 42] = [89; 52] and A[6; 41] = [88; 53]. This amounts to solving the system
5a + 42b = 89
6a + 41b = 88
5c + 42d = 52
6c + 41d = 53.
(Here we really have two systems with two unknowns each.)
The unique solution is a = 1, b = 2, c = 2, and d = 1, so that A = [1 2; 2 1].
2.1.36 First we draw ~w in terms of ~v1 and ~v2, so that ~w = c1~v1 + c2~v2 for some c1 and c2. Then, we scale the ~v2-component by 3, so our new vector equals c1~v1 + 3c2~v2.
2.1.37 Since ~x = ~v + k(~w - ~v), we have T(~x) = T(~v + k(~w - ~v)) = T(~v) + k(T(~w) - T(~v)), by Theorem 2.1.3.
Since k is between 0 and 1, the tip of this vector T(~x) is on the line segment connecting the tips of T(~v) and T(~w). (See Figure 2.18.)
Figure 2.18: for Problem 2.1.37.
2.1.38 T([2; -1]) = [~v1 ~v2][2; -1] = 2~v1 - ~v2 = 2~v1 + (-~v2). (See Figure 2.19.)
Figure 2.19: for Problem 2.1.38.
2.1.39 By Theorem 2.1.2, we have T([x1; . . . ; xm]) = [T(~e1) . . . T(~em)][x1; . . . ; xm] = x1T(~e1) + · · · + xmT(~em).
2.1.40 These linear transformations are of the form [y] = [a][x], or y = ax. The graph of such a function is a line
through the origin.
2.1.41 These linear transformations are of the form [y] = [a b][x1; x2], or y = ax1 + bx2. The graph of such a function is a plane through the origin.
2.1.42 a See Figure 2.20.
b The image of the point [1; 1/2; 1/2] is the origin [0; 0].

Figure 2.20: for Problem 2.1.42.

c Solve the equation [-1/2 1 0; -1/2 0 1][x1; x2; x3] = [0; 0], or
-(1/2)x1 + x2 = 0
-(1/2)x1 + x3 = 0.
The solutions are of the form [x1; x2; x3] = [2t; t; t], where t is an arbitrary real number. For example, for t = 1/2, we find the point [1; 1/2; 1/2] considered in part b. These points are on the line through the origin and the observer's eye.
2.1.43 a T(~x) = [2; 3; 4] · [x1; x2; x3] = 2x1 + 3x2 + 4x3 = [2 3 4][x1; x2; x3].
The transformation is indeed linear, with matrix [2 3 4].

b If ~v = [v1; v2; v3], then T is linear with matrix [v1 v2 v3], as in part (a).

c Let [a b c] be the matrix of T. Then T([x1; x2; x3]) = [a b c][x1; x2; x3] = ax1 + bx2 + cx3 = [a; b; c] · [x1; x2; x3], so that ~v = [a; b; c] does the job.
2.1.44 T([x1; x2; x3]) = [v1; v2; v3] × [x1; x2; x3] = [v2x3 - v3x2; v3x1 - v1x3; v1x2 - v2x1] = [0 -v3 v2; v3 0 -v1; -v2 v1 0][x1; x2; x3], so that T is linear, with matrix [0 -v3 v2; v3 0 -v1; -v2 v1 0].
2.1.45 Yes, ~z = L(T(~x)) is also linear, which we will verify using Theorem 2.1.3. Part a holds, since L(T(~v + ~w)) = L(T(~v) + T(~w)) = L(T(~v)) + L(T(~w)), and part b also works, because L(T(k~v)) = L(kT(~v)) = kL(T(~v)).
2.1.46 T([1; 0]) = B(A[1; 0]) = B[a; c] = [pa + qc; ra + sc] and T([0; 1]) = B(A[0; 1]) = B[b; d] = [pb + qd; rb + sd].
So, T([x1; x2]) = x1T([1; 0]) + x2T([0; 1]) = x1[pa + qc; ra + sc] + x2[pb + qd; rb + sd].
2.1.47 Write ~w as a linear combination of ~v1 and ~v2: ~w = c1~v1 + c2~v2. (See Figure 2.21.)

Figure 2.21: for Problem 2.1.47.

Measurements show that we have roughly ~w = 1.5~v1 + ~v2.
Therefore, by linearity, T(~w) = T(1.5~v1 + ~v2) = 1.5T(~v1) + T(~v2). (See Figure 2.22.)

Figure 2.22: for Problem 2.1.47.
2.1.48 Let ~x be some vector in R2 . Since ~v1 and ~v2 are not parallel, we can write ~x in terms of components of ~v1
and ~v2 . So, let c1 and c2 be scalars such that ~x = c1~v1 + c2~v2 . Then, by Theorem 2.1.3, T (~x) = T (c1~v1 + c2~v2 ) =
T (c1~v1 ) + T (c2~v2 ) = c1 T (~v1 ) + c2 T (~v2 ) = c1 L(~v1 ) + c2 L(~v2 ) = L(c1~v1 + c2~v2 ) = L(~x). So T (~x) = L(~x) for all ~x
in R2 .
2.1.49 Denote the components of ~x with xj and the entries of A with aij. We are told that Σ_{j=1}^n xj = 1 and Σ_{i=1}^n aij = 1 for all j = 1, . . . , n. Now the ith component of A~x is Σ_{j=1}^n aij xj, so that the sum of all components of A~x is
Σ_{i=1}^n Σ_{j=1}^n aij xj = Σ_{j=1}^n Σ_{i=1}^n aij xj = Σ_{j=1}^n (Σ_{i=1}^n aij) xj = Σ_{j=1}^n xj = 1, as claimed.
Also, the components of A~x are nonnegative since all the scalars aij and xj are nonnegative. Therefore, A~x is a
distribution vector.
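A quick numerical sanity check of this claim (the 3 × 3 matrix and vector below are my own examples, not from the exercise):

```python
from fractions import Fraction

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

half = Fraction(1, 2)
# Each column sums to 1 and all entries are nonnegative
A = [[0, half, half],
     [half, 0, half],
     [half, half, 0]]
# A distribution vector: nonnegative entries summing to 1
x = [Fraction(1, 5), Fraction(2, 5), Fraction(2, 5)]

y = matvec(A, x)   # again a distribution vector, as claimed
```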
2.1.50 Proceeding as in Exercise 51, we find
A = [0 1 0 0; 1/2 0 1/2 1; 1/2 0 0 0; 0 0 1/2 0] and ~x_equ = (1/11)[4; 4; 2; 1].
Pages 1 and 2 have the highest naive PageRank.
2.1.51 a. We can construct the transition matrix A column by column, as discussed in Example 9:
A = [0 0 1/3 0; 1/2 0 1/3 1/2; 1/2 0 0 1/2; 0 1 1/3 0].
For example, the first column represents the fact that half of the surfers from page 1 take the link to page 2, while the other half go to page 3.

b. To find the equilibrium vector, we need to solve the system A~x = ~x = I4~x or (A - I4)~x = ~0. We use technology to find
rref(A - I4) = [1 0 0 -1/5; 0 1 0 -4/5; 0 0 1 -3/5; 0 0 0 0].
The solutions are of the form ~x = [t; 4t; 3t; 5t], where t is arbitrary. The distribution vector among these solutions must satisfy the condition t + 4t + 3t + 5t = 13t = 1, or t = 1/13. Thus ~x_equ = (1/13)[1; 4; 3; 5].

c. Page 4 has the highest naive PageRank.
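Instead of row-reducing A - I4, the equilibrium can also be approximated by repeatedly applying A to any starting distribution (power iteration); a sketch of mine, assuming the transition matrix reconstructed above:

```python
def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

A = [[0,   0, 1/3, 0],
     [1/2, 0, 1/3, 1/2],
     [1/2, 0, 0,   1/2],
     [0,   1, 1/3, 0]]

x = [0.25, 0.25, 0.25, 0.25]   # start from the uniform distribution
for _ in range(200):
    x = matvec(A, x)
# x is now very close to (1/13)[1, 4, 3, 5]
```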
2.1.52 Proceeding as in Exercise 51, we find
A = [0 1 0; 1/2 0 1; 1/2 0 0] and ~x_equ = (1/5)[2; 2; 1].
Pages 1 and 2 have the highest naive PageRank.
2.1.53 a. Constructing the matrix B column by column, as explained for the second column, we find
B = [0.05 0.45 0.05 0.05; 0.45 0.05 0.05 0.85; 0.45 0.45 0.05 0.05; 0.05 0.05 0.85 0.05].
b. The matrix 0.05E accounts for the jumpers, since 5% of the surfers from a given page jump to any other page (or stay put). The matrix 0.8A accounts for the 80% of the surfers who follow links.

c. To find the equilibrium vector, we need to solve the system B~x = ~x = I4~x or (B - I4)~x = ~0. We use technology to find
rref(B - I4) = [1 0 0 -5/7; 0 1 0 -9/7; 0 0 1 -1; 0 0 0 0].
The solutions are of the form ~x = [5t; 9t; 7t; 7t], where t is arbitrary. Now ~x is a distribution vector when t = 1/28. Thus ~x_equ = (1/28)[5; 9; 7; 7]. Page 2 has the highest PageRank.
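The construction B = 0.05E + 0.8A from part b can be sketched directly; here A is the link-following matrix read off from B above, so treat it as an assumption about the mini-Web rather than data from the text.

```python
n = 4
# Assumed link matrix: pages 1 and 2 link to each other's neighbors,
# page 3 links only to page 4, page 4 only to page 2
A = [[0,   1/2, 0, 0],
     [1/2, 0,   0, 1],
     [1/2, 1/2, 0, 0],
     [0,   0,   1, 0]]
# B = 0.05*E + 0.8*A, where E is the all-ones matrix
B = [[0.05 + 0.8 * A[i][j] for j in range(n)] for i in range(n)]
# Every column of B still sums to 1, so B is again a transition matrix.
```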
2.1.54 a. Here we consider the same mini-Web as in Exercise 50. Using the formula for B from Exercise 53b, we find
B = [0.05 0.85 0.05 0.05; 0.45 0.05 0.45 0.85; 0.45 0.05 0.05 0.05; 0.05 0.05 0.45 0.05].

b. Proceeding as in Exercise 53, we find ~x_equ = (1/1124)[377; 401; 207; 139].

c. Page 2 has the highest PageRank.
2.1.55 Here we consider the same mini-Web as in Exercise 51. Proceeding as in Exercise 53, we find
B = [0.05 0.05 19/60 0.05; 0.45 0.05 19/60 0.45; 0.45 0.05 0.05 0.45; 0.05 0.85 19/60 0.05] and ~x_equ = (1/2860)[323; 855; 675; 1007].
Page 4 has the highest PageRank.
2.1.56 Here we consider the same mini-Web as in Exercise 52. Proceeding as in Exercise 53, we find
B = (1/15)[1 13 1; 7 1 13; 7 1 1] and ~x_equ = (1/159)[61; 63; 35].
Page 2 has the highest PageRank.
2.1.57 a Let x1 be the number of 2 Franc coins, and x2 be the number of 5 Franc coins. Then
2x1 + 5x2 = 144
x1 + x2 = 51.
From this we easily find our solution vector to be [37; 14].
b [total value of coins; total number of coins] = [2x1 + 5x2; x1 + x2] = [2 5; 1 1][x1; x2].
So, A = [2 5; 1 1].
c By Exercise 13, matrix A is invertible (since ad - bc = -3 ≠ 0), and A^(-1) = (1/(ad - bc))[d -b; -c a] = -(1/3)[1 -5; -1 2].
Then -(1/3)[1 -5; -1 2][144; 51] = -(1/3)[144 - 5(51); -144 + 2(51)] = -(1/3)[-111; -42] = [37; 14], which was the vector we found in part a.
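The computation in part c, sketched with exact fractions (variable names are mine):

```python
from fractions import Fraction

# A = [2 5; 1 1]; det = -3, so A^-1 = -(1/3)[1 -5; -1 2]
det = Fraction(2 * 1 - 5 * 1)
A_inv = [[1 / det, -5 / det],
         [-1 / det, 2 / det]]

value, count = 144, 51
x1 = A_inv[0][0] * value + A_inv[0][1] * count   # number of 2 Franc coins
x2 = A_inv[1][0] * value + A_inv[1][1] * count   # number of 5 Franc coins
# (x1, x2) = (37, 14), matching part a
```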
2.1.58 a Let [p; s] = [mass of the platinum alloy; mass of the silver alloy]. Using the definition density = mass/volume, or volume = mass/density, we can set up the system:
p + s = 5,000
p/20 + s/10 = 370,
with the solution p = 2,600 and s = 2,400. We see that the platinum alloy makes up only 52 percent of the crown; this goldsmith is a crook!
b We seek the matrix A such that A[p; s] = [total mass; total volume] = [p + s; p/20 + s/10]. Thus A = [1 1; 1/20 1/10].
c Yes. By Exercise 13, A^(-1) = 20[1/10 -1; -1/20 1] = [2 -20; -1 20]. Applied to the case considered in part a, we find that
A^(-1)[total mass; total volume] = [2 -20; -1 20][5,000; 370] = [2,600; 2,400], confirming our answer in part a.
2.1.59 a [C; 1] = [(5/9)(F - 32); 1] = [(5/9)F - 160/9; 1] = [5/9 -160/9; 0 1][F; 1]. So A = [5/9 -160/9; 0 1].

b Using Exercise 13, we find (5/9)(1) - (-160/9)(0) = 5/9 ≠ 0, so A is invertible.
A^(-1) = (9/5)[1 160/9; 0 5/9] = [9/5 32; 0 1]. So, F = (9/5)C + 32.
2.1.60 a A~x = [300; 2,400], meaning that the total value of our money is C$300, or, equivalently, ZAR2400.

b From Exercise 13, we test the value ad - bc and find it to be zero. Thus A is not invertible. To determine when the system A~x = ~b is consistent, we begin to compute rref[A | ~b]:
[1 1/8 | b1; 8 1 | b2]. Subtracting 8 times the first row from the second (-8(I)) gives [1 1/8 | b1; 0 0 | b2 - 8b1].
Thus, the system is consistent only when b2 = 8b1. This makes sense, since b2 is the total value of our money in terms of Rand, while b1 is the value in terms of Canadian dollars. Consider the example in part a. If the system A~x = ~b is consistent, then there will be infinitely many solutions ~x, representing various compositions of our portfolio in terms of Rand and Canadian dollars, all representing the same total value.
2.1.61 All four entries along the diagonal must be 1: they represent the process of converting a currency to itself. We also know that aij = aji^(-1) for all i and j, because converting currency i to currency j is the inverse of converting currency j to currency i. This gives us three more entries,
A = [1 4/5 * 5/4; 5/4 1 * *; * * 1 10; 4/5 * 1/10 1],
where * marks the entries still to be found. Next let's find the entry a31, giving the value of one Euro expressed in Yuan. Now €1 = £(4/5) and £1 = ¥10, so that €1 = ¥10(4/5) = ¥8. We have found that a31 = a34 a41 = 8. Similarly we have aij = aik akj for all indices i, j, k = 1, 2, 3, 4. This gives a24 = a21 a14 = 25/16 and a23 = a24 a43 = 5/32. Using the fact that aij = aji^(-1), we can complete the matrix:
A = [1 4/5 1/8 5/4; 5/4 1 5/32 25/16; 8 32/5 1 10; 4/5 16/25 1/10 1] = [1 0.8 0.125 1.25; 1.25 1 0.15625 1.5625; 8 6.4 1 10; 0.8 0.64 0.1 1].
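The consistency relation aij = aik akj (and hence the rank-1 structure noted in Exercise 62) can be verified on the completed matrix; a sketch of mine using exact fractions:

```python
from fractions import Fraction as F

A = [[F(1),    F(4, 5),   F(1, 8),  F(5, 4)],
     [F(5, 4), F(1),      F(5, 32), F(25, 16)],
     [F(8),    F(32, 5),  F(1),     F(10)],
     [F(4, 5), F(16, 25), F(1, 10), F(1)]]

def consistent(A):
    # a_ij = a_ik * a_kj must hold for all index triples
    n = len(A)
    return all(A[i][j] == A[i][k] * A[k][j]
               for i in range(n) for j in range(n) for k in range(n))
```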
2.1.62 a 1: this represents converting a currency to itself.
b aij is the reciprocal of aji , meaning that aij aji = 1. This represents converting on currency to another, then
converting it back.
c Note that aik is the conversion factor from currency k to currency i meaning that
(1 unit of currency k) = (aik units of currency i)
Likewise,
(1 unit of currency j) = (akj units of currency k).
It follows that
(1 unit of currency j) = (akj aik units of currency i) = (aij units of currency i), so that aik akj = aij .
d The rank of A is only 1, because every row is simply a scalar multiple of the top row. More precisely, since
aij = ai1 a1j , by part c, the ith row is ai1 times the top row. When we compute the rref, every row but the top
will be removed in the first step. Thus, rref(A) is a matrix with the top row of A and zeroes for all other entries.
2.1.63 a We express the leading variables x1, x3, x4 in terms of the free variables x2, x5:
x1 = x2 - 4x5
x3 = x5
x4 = 2x5
Written in vector form,
[x1; x3; x4] = [1 -4; 0 1; 0 2][x2; x5], with B = [1 -4; 0 1; 0 2].
2.1.64 a The given system reduces to
x1 + 2x2 + 3x4 = 0
x3 + 4x4 = 0
or
x1 = -2x2 - 3x4
x3 = -4x4.
Written in vector form,
[x1; x3] = [-2 -3; 0 -4][x2; x4], with B = [-2 -3; 0 -4].
Section 2.2
2.2.1 The standard L is transformed into a distorted L whose foot is the vector T([1; 0]) = [3 1; 1 2][1; 0] = [3; 1].
Meanwhile, the back becomes the vector T([0; 2]) = [3 1; 1 2][0; 2] = [2; 4].
2.2.2 By Theorem 2.2.3, this matrix is [cos(60°) -sin(60°); sin(60°) cos(60°)] = [1/2 -√3/2; √3/2 1/2].
2.2.3 If ~x is in the unit square in R², then ~x = x1~e1 + x2~e2 with 0 ≤ x1, x2 ≤ 1, so that
T (~x) = T (x1~e1 + x2~e2 ) = x1 T (~e1 ) + x2 T (~e2 ).
The image of the unit square is a parallelogram in R3 ; two of its sides are T (~e1 ) and T (~e2 ), and the origin is one
of its vertices. (See Figure 2.23.)
Figure 2.23: for Problem 2.2.3.
2.2.4 By Theorem 2.2.4, this is a rotation combined with a scaling. The transformation rotates 45 degrees counterclockwise, and has a scaling factor of √2.
2.2.5 Note that cos(θ) = -0.8, so that θ = arccos(-0.8) ≈ 2.498.
2.2.6 By Theorem 2.2.1, projL[1; 1; 1] = (~u · [1; 1; 1])~u, where ~u is a unit vector on L. To get ~u, we normalize [2; 1; 2]: ~u = (1/3)[2; 1; 2], so that projL[1; 1; 1] = (5/3) · (1/3)[2; 1; 2] = [10/9; 5/9; 10/9].
2.2.7 According to the discussion in the text, refL[1; 1; 1] = 2(~u · [1; 1; 1])~u - [1; 1; 1], where ~u is a unit vector on L. To get ~u, we normalize [2; 1; 2]: ~u = (1/3)[2; 1; 2], so that refL[1; 1; 1] = 2(5/3)(1/3)[2; 1; 2] - [1; 1; 1] = [11/9; 1/9; 11/9].
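The projection and reflection formulas used in Exercises 2.2.6 and 2.2.7 can be written as a short sketch (the function names are mine):

```python
from fractions import Fraction as F

def proj(u, x):
    # projection onto the line spanned by the unit vector u: (u . x) u
    d = sum(ui * xi for ui, xi in zip(u, x))
    return [d * ui for ui in u]

def refl(u, x):
    # reflection about that line: 2 (u . x) u - x
    return [2 * pi - xi for pi, xi in zip(proj(u, x), x)]

u = [F(2, 3), F(1, 3), F(2, 3)]   # unit vector along [2; 1; 2]
x = [F(1), F(1), F(1)]
# proj(u, x) = [10/9; 5/9; 10/9] and refl(u, x) = [11/9; 1/9; 11/9]
```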
2.2.8 From Definition 2.2.2, we can see that this is a reflection about the line x1 = -x2.
2.2.9 By Theorem 2.2.5, this is a vertical shear.
2.2.10 By Theorem 2.2.1, projL~x = (~u · ~x)~u, where ~u is a unit vector on L. We can choose ~u = (1/5)[4; 3] = [0.8; 0.6]. Then
projL[x1; x2] = ([0.8; 0.6] · [x1; x2])[0.8; 0.6] = (0.8x1 + 0.6x2)[0.8; 0.6] = [0.64x1 + 0.48x2; 0.48x1 + 0.36x2] = [0.64 0.48; 0.48 0.36][x1; x2].
The matrix is A = [0.64 0.48; 0.48 0.36].
2.2.11 In Exercise 10 we found the matrix A = [0.64 0.48; 0.48 0.36] of the projection onto the line L. By Theorem 2.2.2,
refL~x = 2(projL~x) - ~x = 2A~x - ~x = (2A - I2)~x, so that the matrix of the reflection is 2A - I2 = [0.28 0.96; 0.96 -0.28].
2.2.12 a. If ~x = ~x∥ + ~x⊥ relative to the line L of reflection, then A~x = ~x∥ - ~x⊥ and A(A~x) = ~x∥ - (-~x⊥) = ~x∥ + ~x⊥ = ~x. In summary, A(A~x) = ~x.

b. A~v = A(~x + A~x) = A~x + A(A~x) = A~x + ~x = ~v. In the third step we use part a.

c. A~w = A(~x - A~x) = A~x - A(A~x) = A~x - ~x = -~w. Again, in the third step we use part a.

d. ~v · ~w = (~x + A~x) · (~x - A~x) = ~x · ~x - (A~x) · (A~x) = ‖~x‖² - ‖A~x‖² = 0, since a reflection preserves length. Thus ~v is perpendicular to ~w.

e. ~v = ~x + A~x = ~x + refL~x = 2 projL~x, by Definition 2.2.2, so that ~v is parallel to L.
2.2.13 By Theorem 2.2.2,
refL[x1; x2] = 2([u1; u2] · [x1; x2])[u1; u2] - [x1; x2] = 2(u1x1 + u2x2)[u1; u2] - [x1; x2] = [(2u1² - 1)x1 + 2u1u2x2; 2u1u2x1 + (2u2² - 1)x2].
The matrix is A = [a b; c d] = [2u1² - 1 2u1u2; 2u1u2 2u2² - 1]. Note that the sum of the diagonal entries is a + d = 2(u1² + u2²) - 2 = 0, since ~u is a unit vector. It follows that d = -a. Since c = b, A is of the form [a b; b -a]. Also,
a² + b² = (2u1² - 1)² + 4u1²u2² = 4u1⁴ - 4u1² + 1 + 4u1²(1 - u1²) = 1, as claimed.
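The form A = [a b; b -a] just derived can be instantiated for a concrete unit vector (a sketch; the function name is mine):

```python
import math

def reflection_matrix(theta):
    # reflection about the line spanned by u = (cos theta, sin theta):
    # [[2u1^2 - 1, 2u1u2], [2u1u2, 2u2^2 - 1]]
    u1, u2 = math.cos(theta), math.sin(theta)
    return [[2 * u1 * u1 - 1, 2 * u1 * u2],
            [2 * u1 * u2, 2 * u2 * u2 - 1]]

A = reflection_matrix(math.pi / 4)
# A is (up to rounding) [[0, 1], [1, 0]], the reflection about x2 = x1
```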
2.2.14 a Proceeding as on pages 61/62 in the text, we find that A is the matrix whose ijth entry is ui uj:
A = [u1² u1u2 u1u3; u2u1 u2² u2u3; u3u1 u3u2 u3²].

b The sum of the diagonal entries is u1² + u2² + u3² = 1, since ~u is a unit vector.
2.2.15 According to the discussion on page 60 in the text,
refL(~x) = 2(~x · ~u)~u - ~x = 2(x1u1 + x2u2 + x3u3)[u1; u2; u3] - [x1; x2; x3]
= [(2u1² - 1)x1 + 2u2u1x2 + 2u1u3x3; 2u1u2x1 + (2u2² - 1)x2 + 2u2u3x3; 2u1u3x1 + 2u2u3x2 + (2u3² - 1)x3].
So A = [2u1² - 1 2u2u1 2u1u3; 2u1u2 2u2² - 1 2u2u3; 2u1u3 2u2u3 2u3² - 1].
2.2.16 a See Figure 2.24.
b By Theorem 2.1.2, the matrix of T is [T(~e1) T(~e2)].
T(~e2) is the unit vector in the fourth quadrant perpendicular to T(~e1) = [cos(2θ); sin(2θ)], so that T(~e2) = [sin(2θ); -cos(2θ)]. The matrix of T is therefore [cos(2θ) sin(2θ); sin(2θ) -cos(2θ)].
Alternatively, we can use the result of Exercise 13, with [u1; u2] = [cos θ; sin θ], to find the matrix [2cos²θ - 1 2cos θ sin θ; 2cos θ sin θ 2sin²θ - 1].
You can use trigonometric identities to show that the two results agree. (See Figure 2.25.)
Figure 2.24: for Problem 2.2.16a.
Figure 2.25: for Problem 2.2.16b.
2.2.17 We want [a b; b -a][v1; v2] = [av1 + bv2; bv1 - av2] = [v1; v2]. Now (a - 1)v1 + bv2 = 0 and bv1 - (a + 1)v2 = 0, which is a system with solutions of the form [bt; (1 - a)t], where t is an arbitrary constant.
Let's choose t = 1, making ~v = [b; 1 - a].
Similarly, we want A~w = -~w. We perform a computation as above to reveal ~w = [a - 1; b] as a possible choice.
A quick check of ~v · ~w = 0 reveals that they are indeed perpendicular.
Now, any vector ~x in R² can be written in terms of components with respect to L = span(~v) as ~x = ~x∥ + ~x⊥ = c~v + d~w. Then, T(~x) = A~x = A(c~v + d~w) = A(c~v) + A(d~w) = cA~v + dA~w = c~v - d~w = ~x∥ - ~x⊥ = refL(~x), by Definition 2.2.2.
(The vectors ~v and ~w constructed above are both zero in the special case that a = 1 and b = 0. In that case, we can let ~v = ~e1 and ~w = ~e2 instead.)
2.2.18 From Exercise 17, we know that the reflection is about the line parallel to ~v = [b; 1 - a] = [0.8; 0.4] = 0.4[2; 1]. So, every point on this line can be described as [x; y] = k[2; 1]. So, y = k = (1/2)x, and y = (1/2)x is the line we are looking for.
2.2.19 T(~e1) = ~e1, T(~e2) = ~e2, and T(~e3) = ~0, so that the matrix is [1 0 0; 0 1 0; 0 0 0].
2.2.20 T(~e1) = ~e1, T(~e2) = -~e2, and T(~e3) = ~e3, so that the matrix is [1 0 0; 0 -1 0; 0 0 1].

2.2.21 T(~e1) = ~e2, T(~e2) = -~e1, and T(~e3) = ~e3, so that the matrix is [0 -1 0; 1 0 0; 0 0 1]. (See Figure 2.26.)
Figure 2.26: for Problem 2.2.21.
2.2.22 Sketch the ~e1-~e3 plane, as viewed from the positive ~e2 axis.
Figure 2.27: for Problem 2.2.22.
Since T(~e2) = ~e2, the matrix is [cos θ 0 sin θ; 0 1 0; -sin θ 0 cos θ]. (See Figure 2.27.)
2.2.23 T(~e1) = ~e3, T(~e2) = ~e2, and T(~e3) = ~e1, so that the matrix is [0 0 1; 0 1 0; 1 0 0]. (See Figure 2.28.)
Figure 2.28: for Problem 2.2.23.
2.2.24 a A = [~v ~w], so A[1; 0] = ~v and A[0; 1] = ~w. Since A preserves length, both ~v and ~w must be unit vectors. Furthermore, since A preserves angles and [1; 0] and [0; 1] are clearly perpendicular, ~v and ~w must also be perpendicular.

b Since ~w is a unit vector perpendicular to ~v, it can be obtained by rotating ~v through 90 degrees, either in the counterclockwise or in the clockwise direction. Using the corresponding rotation matrices, we see that ~w = [0 -1; 1 0]~v = [-b; a] or ~w = [0 1; -1 0]~v = [b; -a].

c Following part b, A is either of the form [a -b; b a], representing a rotation, or A = [a b; b -a], representing a reflection.
2.2.25 The matrix A = [1 k; 0 1] represents a horizontal shear, and its inverse A^(-1) = [1 -k; 0 1] represents such a shear as well, but "the other way."
2.2.26 a [k 0; 0 k][2; -1] = [2k; -k] = [8; -4]. So k = 4 and A = [4 0; 0 4].

b This is the orthogonal projection onto the horizontal axis, with matrix B = [1 0; 0 0].

c [a -b; b a][0; 5] = [-5b; 5a] = [3; 4]. So a = 4/5, b = -3/5, and C = [4/5 3/5; -3/5 4/5]. Note that a² + b² = 1, as required for a rotation matrix.

d Since the x1 term is being modified, this must be a horizontal shear.
Then [1 k; 0 1][1; 3] = [1 + 3k; 3] = [7; 3]. So k = 2 and D = [1 2; 0 1].

e [a b; b -a][7; 1] = [7a + b; 7b - a] = [-5; 5]. So a = -4/5, b = 3/5, and E = [-4/5 3/5; 3/5 4/5]. Note that a² + b² = 1, as required for a reflection matrix.
2.2.27 Matrix B clearly represents a scaling.
Matrix C represents a projection, by Definition 2.2.1, with u1 = 0.6 and u2 = 0.8.
Matrix E represents a shear, by Theorem 2.2.5.
Matrix A represents a reflection, by Definition 2.2.2.
Matrix D represents a rotation, by Definition 2.2.3.
2.2.28 a D is a scaling, being of the form [k 0; 0 k].
b E is the shear, since it is the only matrix which has the proper form (Theorem 2.2.5).
c C is the rotation, since it fits Theorem 2.2.3.
d A is the projection, following the form given in Definition 2.2.1.
e F is the reflection, using Definition 2.2.2.
2.2.29 To check that L is linear, we verify the two parts of Theorem 2.1.3:
a) Use the hint and apply L to both sides of the equation ~x + ~y = T(L(~x) + L(~y)):
L(~x + ~y) = L(T(L(~x) + L(~y))) = L(~x) + L(~y), as claimed.
b) L(k~x) = L(kT(L(~x))) = L(T(kL(~x))) = kL(~x), as claimed. (Here we use the facts that ~x = T(L(~x)) and that T is linear.)
2.2.30 Write A = [~v1 ~v2]; then A~x = [~v1 ~v2][x1; x2] = x1~v1 + x2~v2. We must choose ~v1 and ~v2 in such a way that x1~v1 + x2~v2 is a scalar multiple of the vector [1; 2], for all x1 and x2. This is the case if (and only if) both ~v1 and ~v2 are scalar multiples of [1; 2].
For example, choose ~v1 = [1; 2] and ~v2 = [0; 0], so that A = [1 0; 2 0].
2.2.31 Write A = [~v1 ~v2 ~v3]; then A~x = [~v1 ~v2 ~v3][x1; x2; x3] = x1~v1 + x2~v2 + x3~v3.
We must choose ~v1, ~v2, and ~v3 in such a way that x1~v1 + x2~v2 + x3~v3 is perpendicular to ~w = [1; 2; 3] for all x1, x2, and x3. This is the case if (and only if) all the vectors ~v1, ~v2, and ~v3 are perpendicular to ~w, that is, if ~v1 · ~w = ~v2 · ~w = ~v3 · ~w = 0.
For example, we can choose ~v1 = [-2; 1; 0] and ~v2 = ~v3 = ~0, so that A = [-2 0 0; 1 0 0; 0 0 0].
2.2.32 a See Figure 2.29.
Figure 2.29: for Problem 2.2.32a.
b Compute D~v = [cos α -sin α; sin α cos α][cos β; sin β] = [cos α cos β - sin α sin β; sin α cos β + cos α sin β].
Comparing this result with our finding in part (a), we get the addition theorems
cos(α + β) = cos α cos β - sin α sin β
sin(α + β) = sin α cos β + cos α sin β
2.2.33 Geometrically, we can find the representation ~v = ~v1 + ~v2 by means of a parallelogram, as shown in Figure 2.30.
To show the existence and uniqueness of this representation algebraically, choose a nonzero vector ~w1 in L1 and a nonzero ~w2 in L2. Then the system x1~w1 + x2~w2 = ~0 or [~w1 ~w2][x1; x2] = ~0 has only the solution x1 = x2 = 0 (if x1~w1 + x2~w2 = ~0 then x1~w1 = -x2~w2 is both in L1 and in L2, so that it must be the zero vector).

Figure 2.30: for Problem 2.2.33.

Therefore, the system x1~w1 + x2~w2 = ~v or [~w1 ~w2][x1; x2] = ~v has a unique solution x1, x2 for all ~v in R² (by Theorem 1.3.4). Now set ~v1 = x1~w1 and ~v2 = x2~w2 to obtain the desired representation ~v = ~v1 + ~v2. (Compare with Exercise 1.3.57.)
To show that the transformation T(~v) = ~v1 is linear, we will verify the two parts of Theorem 2.1.3.
Let ~v = ~v1 + ~v2 and ~w = ~w1 + ~w2, with ~v1, ~w1 in L1 and ~v2, ~w2 in L2, so that ~v + ~w = (~v1 + ~w1) + (~v2 + ~w2) and k~v = k~v1 + k~v2, with ~v1 + ~w1 and k~v1 in L1, and ~v2 + ~w2 and k~v2 in L2.
a. T(~v + ~w) = ~v1 + ~w1 = T(~v) + T(~w), and
b. T(k~v) = k~v1 = kT(~v), as claimed.
2.2.34 Keep in mind that the columns of the matrix of a linear transformation T from R3 to R3 are T(~e1), T(~e2), and T(~e3).

If T is the orthogonal projection onto a line L, then T(~x) will be on L for all ~x in R3; in particular, the three columns of the matrix of T will be on L, and therefore pairwise parallel. This is the case only for matrix B: B represents an orthogonal projection onto a line.

A reflection transforms orthogonal vectors into orthogonal vectors; therefore, the three columns of its matrix must be pairwise orthogonal. This is the case only for matrix E: E represents the reflection about a line.

2.2.35 If the vectors ~v1 and ~v2 are defined as shown in Figure 2.27, then the parallelogram P consists of all vectors of the form ~v = c1~v1 + c2~v2, where 0 ≤ c1, c2 ≤ 1.

The image of P consists of all vectors of the form T(~v) = T(c1~v1 + c2~v2) = c1T(~v1) + c2T(~v2). These vectors form the parallelogram shown in Figure 2.31 on the right.

2.2.36 If the vectors ~v0, ~v1, and ~v2 are defined as shown in Figure 2.28, then the parallelogram P consists of all vectors ~v of the form ~v = ~v0 + c1~v1 + c2~v2, where 0 ≤ c1, c2 ≤ 1.

The image of P consists of all vectors of the form T(~v) = T(~v0 + c1~v1 + c2~v2) = T(~v0) + c1T(~v1) + c2T(~v2). These vectors form the parallelogram shown in Figure 2.32 on the right.
2.2.37 a By Definition 2.2.1, a projection has a matrix of the form [u1^2  u1u2; u1u2  u2^2], where [u1; u2] is a unit vector.
74
Copyright c 2013 Pearson Education, Inc.
Section 2.2
Figure 2.31: for Problem 2.2.35.
Figure 2.32: for Problem 2.2.36.
So the trace is u1^2 + u2^2 = 1.
b By Definition 2.2.2, reflection matrices look like [a b; b −a], so the trace is a − a = 0.
c According to Theorem 2.2.3, a rotation matrix has the form [cos θ  −sin θ; sin θ  cos θ] for some θ, so the trace is cos θ + cos θ = 2 cos θ. Thus, the trace is in the interval [−2, 2].
d By Theorem 2.2.5, the matrix of a shear appears as either [1 k; 0 1] or [1 0; k 1], depending on whether it represents a horizontal or a vertical shear. In both cases, however, the trace is 1 + 1 = 2.
2.2.38 a A = [u1^2  u1u2; u1u2  u2^2], so det(A) = u1^2 u2^2 − u1u2 · u1u2 = 0.

b A = [a b; b −a], so det(A) = −a^2 − b^2 = −(a^2 + b^2) = −1.

c A = [a −b; b a], so det(A) = a^2 − (−b^2) = a^2 + b^2 = 1.

d A = [1 k; 0 1] or A = [1 0; k 1], both of which have determinant equal to 1^2 − 0 = 1.
2.2.39 a Note that [1 1; 1 1] = 2[1/2 1/2; 1/2 1/2]. The matrix [1/2 1/2; 1/2 1/2] represents an orthogonal projection (Definition 2.2.1), with ~u = [u1; u2] = [√2/2; √2/2]. So [1 1; 1 1] represents a projection combined with a scaling by a factor of 2.

b This looks similar to a shear, with the one zero off the diagonal. Since the two diagonal entries are identical, we can write [3 0; −1 3] = 3[1 0; −1/3 1], showing that this matrix represents a vertical shear combined with a scaling by a factor of 3.
c We are asked to write [3 4; 4 −3] = k[3/k 4/k; 4/k −3/k], with our scaling factor k yet to be determined. This matrix has the form of a reflection matrix [a b; b −a]. This form further requires that 1 = a^2 + b^2 = (3/k)^2 + (4/k)^2, or k = 5. Thus, the matrix represents a reflection combined with a scaling by a factor of 5.
2.2.40 ~x = projP ~x + projQ ~x, as illustrated in Figure 2.33.
Figure 2.33: for Problem 2.2.40.
Figure 2.34: for Problem 2.2.41.
2.2.41 refQ~x = −refP~x since refQ~x, refP~x, and ~x all have the same length, and refQ~x and refP~x enclose an angle of 2α + 2β = 2(α + β) = π. (See Figure 2.34.)
2.2.42 T(~x) = T(T(~x)), since T(~x) is on L; hence the projection of T(~x) onto L is T(~x) itself.
2.2.43 Since ~y = A~x is obtained from ~x by a rotation through θ in the counterclockwise direction, ~x is obtained from ~y by a rotation through θ in the clockwise direction, that is, a rotation through −θ. (See Figure 2.35.)
Figure 2.35: for Problem 2.2.43.
Therefore, the matrix of the inverse transformation is A^(−1) = [cos(−θ)  −sin(−θ); sin(−θ)  cos(−θ)] = [cos θ  sin θ; −sin θ  cos θ]. You can use the formula in Exercise 2.1.13b to check this result.

2.2.44 By Exercise 2.1.13b, A^(−1) = [a −b; b a]^(−1) = (1/(a^2 + b^2))[a b; −b a].

If A represents a rotation through θ followed by a scaling by r, then A^(−1) represents a rotation through −θ followed by a scaling by 1/r. (See Figure 2.36.)
Figure 2.36: for Problem 2.2.44.
2.2.45 By Exercise 2.1.13, A^(−1) = (1/(−a^2 − b^2))[−a −b; −b a] = (−1/(a^2 + b^2))[−a −b; −b a] = −[−a −b; −b a] = [a b; b −a], using a^2 + b^2 = 1.

So A^(−1) = A, which makes sense: reflecting a vector twice about the same line will return it to its original state.
2.2.46 We want to write A = k[a/k  b/k; b/k  −a/k], where the matrix B = [a/k  b/k; b/k  −a/k] represents a reflection. It is required that (a/k)^2 + (b/k)^2 = 1, meaning that a^2 + b^2 = k^2, or k = √(a^2 + b^2). Now A^(−1) = (1/(a^2 + b^2))[a b; b −a] = (1/k^2)A = (1/k)B, for the reflection matrix B and the scaling factor k introduced above. In summary: If A represents a reflection combined with a scaling by k, then A^(−1) represents the same reflection combined with a scaling by 1/k.

2.2.47 a. Let A = [a b; c d]. Then
f(t) = ([a b; c d][cos t; sin t]) · ([a b; c d][−sin t; cos t]) = [a cos t + b sin t; c cos t + d sin t] · [−a sin t + b cos t; −c sin t + d cos t]

= (a cos t + b sin t)(−a sin t + b cos t) + (c cos t + d sin t)(−c sin t + d cos t),

a continuous function.
b. f(0) = T([1; 0]) · T([0; 1]) and f(π/2) = T([0; 1]) · T([−1; 0]) = −T([0; 1]) · T([1; 0]) = −f(0).

c. If f(0) = f(π/2) = 0, then we can let c = 0. If f(0) and f(π/2) are both nonzero, with opposite signs (by part b), then the intermediate value theorem (with L = 0) guarantees that there exists a c between 0 and π/2 with f(c) = 0. See Figure 2.37.
Figure 2.37: for Problem 2.2.47c.
2.2.48 Since rotations preserve angles, any two perpendicular unit vectors ~v1 and ~v2 will do the job.
2.2.49 a. A straightforward computation gives f (t) = 15 cos(t) sin(t).
b. The equation f (t) = 0 has the solutions c = 0 and c = ฯ2 .
c. Using c = 0 we find ~v1 = [cos c; sin c] = [1; 0] and ~v2 = [−sin c; cos c] = [0; 1].
2.2.50 a. f(t) = cos(t) sin(t) + cos^2(t) − sin^2(t) = (1/2) sin(2t) + cos(2t)

b. The equation f(t) = 0 has the solution c = (π − arctan 2)/2 ≈ 1.0172 ≈ 1.02.

c. ~v1 = [cos c; sin c] ≈ [0.526; 0.851] and ~v2 = [−sin c; cos c] ≈ [−0.851; 0.526]
2.2.51 a. f(t) = 4 cos^2(t) − 4 sin^2(t) = 4 cos(2t)

b. The equation f(t) = 0 has the solution c = π/4.

c. ~v1 = [cos c; sin c] = (1/√2)[1; 1] and ~v2 = [−sin c; cos c] = (1/√2)[−1; 1]
2.2.52 a. f(t) = 15 sin^2(t) − 15 cos^2(t)

b. The equation f(t) = 0 has the solution c = π/4. See Figure 2.38.

c. ~v1 = [cos c; sin c] = (1/√2)[1; 1] and ~v2 = [−sin c; cos c] = (1/√2)[−1; 1]. See Figure 2.39.
Figure 2.38: for Problem 2.2.52.
Figure 2.39: for Problem 2.2.52.
2.2.53 If ~x = [cos(t); sin(t)], then T(~x) = [5 0; 0 2][cos(t); sin(t)] = [5 cos(t); 2 sin(t)] = cos(t)[5; 0] + sin(t)[0; 2].

These vectors form an ellipse; consider the characterization of an ellipse given in the footnote on Page 75, with ~w1 = [5; 0] and ~w2 = [0; 2]. (See Figure 2.40.)

Figure 2.40: for Problem 2.2.53.
2.2.54 Use the hint: Since the vectors on the unit circle are of the form ~v = cos(t)~v1 + sin(t)~v2 , the image of the
unit circle consists of the vectors of the form T (~v ) = T (cos(t)~v1 + sin(t)~v2 ) = cos(t)T (~v1 ) + sin(t)T (~v2 ).
Figure 2.41: for Problem 2.2.54.
These vectors form an ellipse: consider the characterization of an ellipse given in the footnote, with ~w1 = T(~v1) and ~w2 = T(~v2). The key point is that T(~v1) and T(~v2) are perpendicular. See Figure 2.41.
2.2.55 Consider the linear transformation T with matrix A = [~w1 ~w2], that is, T([x1; x2]) = A[x1; x2] = [~w1 ~w2][x1; x2] = x1~w1 + x2~w2.

The curve C is the image of the unit circle under the transformation T: if ~v = [cos(t); sin(t)] is on the unit circle, then T(~v) = cos(t)~w1 + sin(t)~w2 is on the curve C. Therefore, C is an ellipse, by Exercise 54. (See Figure 2.42.)
Figure 2.42: for Problem 2.2.55.
2.2.56 By definition, the vectors ~v on an ellipse E are of the form ~v = cos(t)~v1 + sin(t)~v2, for some perpendicular vectors ~v1 and ~v2. Then the vectors on the image C of E are of the form T(~v) = cos(t)T(~v1) + sin(t)T(~v2). These vectors form an ellipse, by Exercise 55 (with ~w1 = T(~v1) and ~w2 = T(~v2)). See Figure 2.43.
Figure 2.43: for Problem 2.2.56.
Section 2.3
2.3.1 [4 6; 3 4]

2.3.2 [2 2; 2 0; 7 4]

2.3.3 Undefined

2.3.4 [4 4; −8 −8]

2.3.5 [a b; c d; 0 0]

2.3.6 [0 0; 0 0]

2.3.7 [−1 1 0; 5 3 4; −6 −2 −4]

2.3.8 [ad − bc  0; 0  ad − bc]

2.3.9 [0 0; 0 0]

2.3.10 [1 2 3; 2 4 6; 3 6 9]

2.3.11 [10]
2.3.12 [0 1]
2.3.13 [h]
2.3.14 A^2 = [2 2; 2 2], BC = [14 8 2], BD = [6], C^2 = [−2 −2 0; 4 1 −2; 10 4 −2], CD = [0; 3; 6], DB = [1 2 3; 1 2 3; 1 2 3], DE = [5; 5; 5], EB = [5 10 15], E^2 = [25]
2.3.15 Partition both factors into blocks and multiply the blocks as if they were scalar entries; recombining the block products gives the same matrix as direct entry-by-entry multiplication.

2.3.16 As in Exercise 2.3.15, multiplying the partitioned matrices block by block yields the same result as direct multiplication.
2.3.17 We must find all S = [a b; c d] such that SA = AS, where A = [1 0; 0 2].

So [a b; c d][1 0; 0 2] = [1 0; 0 2][a b; c d], or [a 2b; c 2d] = [a b; 2c 2d], meaning that b = 2b and c = 2c, so b and c must be zero.

We see that all diagonal matrices (those of the form [a 0; 0 d]) commute with [1 0; 0 2].
2.3.18 Following the form of Exercise 17, we let A = [a b; c d]. Now we want [a b; c d][2 3; −3 2] = [2 3; −3 2][a b; c d].

So [2a − 3b  3a + 2b; 2c − 3d  3c + 2d] = [2a + 3c  2b + 3d; −3a + 2c  −3b + 2d], revealing that a = d (since 3a + 2b = 2b + 3d) and −b = c (since 2a − 3b = 2a + 3c).

Thus B is any matrix of the form [a b; −b a].
2.3.19 Again, let A = [a b; c d]. We want [a b; c d][0 −2; 2 0] = [0 −2; 2 0][a b; c d].

Thus [2b −2a; 2d −2c] = [−2c −2d; 2a 2b], meaning that c = −b and d = a.

We see that all matrices of the form [a b; −b a] commute with [0 −2; 2 0].

2.3.20 As in Exercise 2.3.17, we let A = [a b; c d]. Now we want [a b; c d][1 2; 0 1] = [1 2; 0 1][a b; c d].

So [a  2a + b; c  2c + d] = [a + 2c  b + 2d; c  d], revealing that c = 0 (since a + 2c = a) and a = d (since b + 2d = 2a + b).

Thus B is any matrix of the form [a b; 0 a].

2.3.21 Now we want [a b; c d][1 2; 2 −1] = [1 2; 2 −1][a b; c d].

Thus [a + 2b  2a − b; c + 2d  2c − d] = [a + 2c  b + 2d; 2a − c  2b − d]. So a + 2b = a + 2c, or c = b, and 2a − b = b + 2d, revealing d = a − b. (The other two equations are redundant.)

All matrices of the form [a b; b  a − b] commute with [1 2; 2 −1].

2.3.22 As in Exercise 17, we let A = [a b; c d]. Now we want [a b; c d][1 1; 1 1] = [1 1; 1 1][a b; c d].

So [a + b  a + b; c + d  c + d] = [a + c  b + d; a + c  b + d], revealing that a = d (since a + b = b + d) and b = c (since a + c = a + b).

Thus B is any matrix of the form [a b; b a].
2.3.23 We want [a b; c d][1 3; 2 6] = [1 3; 2 6][a b; c d].

Then [a + 2b  3a + 6b; c + 2d  3c + 6d] = [a + 3c  b + 3d; 2a + 6c  2b + 6d]. So a + 2b = a + 3c, or c = (2/3)b, and 3a + 6b = b + 3d, revealing d = a + (5/3)b. (The other two equations are redundant.)

Thus all matrices of the form [a  b; (2/3)b  a + (5/3)b] commute with [1 3; 2 6].
2.3.24 Following the form of Exercise 2.3.17, we let A = [a b c; d e f; g h i].

Then we want [a b c; d e f; g h i][2 0 0; 0 2 0; 0 0 3] = [2 0 0; 0 2 0; 0 0 3][a b c; d e f; g h i].

So [2a 2b 3c; 2d 2e 3f; 2g 2h 3i] = [2a 2b 2c; 2d 2e 2f; 3g 3h 3i]. Thus c, f, g and h must be zero, leaving B to be any matrix of the form [a b 0; d e 0; 0 0 i].
2.3.25 Now we want [a b c; d e f; g h i][2 0 0; 0 3 0; 0 0 2] = [2 0 0; 0 3 0; 0 0 2][a b c; d e f; g h i],

or [2a 3b 2c; 2d 3e 2f; 2g 3h 2i] = [2a 2b 2c; 3d 3e 3f; 2g 2h 2i]. So 3b = 2b, 2d = 3d, 3f = 2f and 3h = 2h, meaning that b, d, f and h must all be zero.

Thus all matrices of the form [a 0 c; 0 e 0; g 0 i] commute with [2 0 0; 0 3 0; 0 0 2].
2.3.26 Following the form of Exercise 2.3.17, we let A = [a b c; d e f; g h i].

Now we want [a b c; d e f; g h i][2 0 0; 0 3 0; 0 0 4] = [2 0 0; 0 3 0; 0 0 4][a b c; d e f; g h i].

So [2a 3b 4c; 2d 3e 4f; 2g 3h 4i] = [2a 2b 2c; 3d 3e 3f; 4g 4h 4i], which forces b, c, d, f, g and h to be zero. a, e and i, however, can be chosen freely.

Thus B is any matrix of the form [a 0 0; 0 e 0; 0 0 i].
2.3.27 We will prove that A(C + D) = AC + AD, repeatedly using Theorem 1.3.10a: A(~x + ~y) = A~x + A~y.

Write C = [~v1 . . . ~vm] and D = [~w1 . . . ~wm]. Then

A(C + D) = A[~v1 + ~w1 · · · ~vm + ~wm] = [A~v1 + A~w1 · · · A~vm + A~wm], and

AC + AD = A[~v1 · · · ~vm] + A[~w1 · · · ~wm] = [A~v1 + A~w1 · · · A~vm + A~wm].

The results agree.
2.3.28 The ijth entries of the three matrices (kA)B, A(kB), and k(AB) are

Σ_{h=1}^{p} (k a_ih) b_hj,   Σ_{h=1}^{p} a_ih (k b_hj),   and   k Σ_{h=1}^{p} a_ih b_hj.

The three results agree.
2.3.29 a Dα Dβ and Dβ Dα are the same transformation, namely, a rotation through α + β.
b Dα Dβ = [cos α −sin α; sin α cos α][cos β −sin β; sin β cos β] = [cos α cos β − sin α sin β   −cos α sin β − sin α cos β; sin α cos β + cos α sin β   −sin α sin β + cos α cos β] = [cos(α + β)  −sin(α + β); sin(α + β)  cos(α + β)]

Dβ Dα yields the same answer.
2.3.30 a See Figure 2.44.

Figure 2.44: for Problem 2.3.30.

The vectors ~x and T(~x) have the same length (since reflections leave the length unchanged), and they enclose an angle of 2(α + β) = 2 · 30° = 60°.

b Based on the answer in part (a), we conclude that T is a rotation through 60°.
c The matrix of T is [cos(60°)  −sin(60°); sin(60°)  cos(60°)] = [1/2  −√3/2; √3/2  1/2].
2.3.31 Write A in terms of its rows: A = [~w1; ~w2; . . . ; ~wn] (suppose A is n × m). We can think of this as a partition into n 1 × m matrices. Now AB = [~w1; ~w2; . . . ; ~wn]B = [~w1 B; ~w2 B; . . . ; ~wn B] (a product of partitioned matrices).

We see that the ith row of AB is the product of the ith row of A and the matrix B.
2.3.32 Let X = [a b; c d]. Then we want X[1 0; 0 0] = [1 0; 0 0]X, or [a 0; c 0] = [a b; 0 0], meaning that b = c = 0. Also, we want X[0 1; 0 0] = [0 1; 0 0]X, or [0 a; 0 0] = [0 d; 0 0], so a = d. Thus X = [a 0; 0 a] = aI2 must be a multiple of the identity matrix. (X will then commute with any 2 × 2 matrix M, since XM = aM = MX.)
2.3.33 A^2 = I2, A^3 = A, A^4 = I2. The power A^n alternates between A = −I2 and I2. The matrix A describes a reflection about the origin; alternatively, one can say A represents a rotation by 180° = π. Since A^2 is the identity, A^1000 is the identity and A^1001 = A = [−1 0; 0 −1].
2.3.34 A^2 = I2, A^3 = A, A^4 = I2. The power A^n alternates between A and I2. The matrix A describes a reflection about the x-axis. Because A^2 is the identity, A^1000 is the identity and A^1001 = A = [1 0; 0 −1].
2.3.35 A^2 = I2, A^3 = A, A^4 = I2. The power A^n alternates between A and I2. The matrix A describes a reflection about the diagonal x = y. Because A^2 is the identity, A^1000 is the identity and A^1001 = A = [0 1; 1 0].
2.3.36 A^2 = [1 2; 0 1], A^3 = [1 3; 0 1], and A^4 = [1 4; 0 1]. The power A^n = [1 n; 0 1] represents a horizontal shear along the x-axis; the shear strength increases linearly in n. We have A^1001 = [1 1001; 0 1].

2.3.37 A^2 = [1 0; −2 1], A^3 = [1 0; −3 1], and A^4 = [1 0; −4 1]. The power A^n represents a vertical shear along the y-axis; the shear magnitude increases linearly in n. We have A^1001 = [1 0; −1001 1].

2.3.38 A^2 = [−1 0; 0 −1], A^3 = −A, A^4 = I2. The matrix A represents the rotation through π/2 in the counterclockwise direction. Since A^4 is the identity matrix, we know that A^1000 is the identity matrix and A^1001 = A = [0 −1; 1 0].

2.3.39 A^2 = [0 1; −1 0], A^3 = (1/√2)[−1 1; −1 −1], A^4 = −I2. The matrix A describes a rotation by π/4 in the clockwise direction. Because A^8 is the identity matrix, we know that A^1000 is the identity matrix and A^1001 = A = (1/√2)[1 1; −1 1].

2.3.40 A = (1/2)[−1 −√3; √3 −1], A^3 = I2, A^4 = A. The matrix A describes a rotation by 120° = 2π/3 in the counterclockwise direction. Because A^999 is the identity matrix, we know that A^1001 = A^2 = (1/2)[−1 √3; −√3 −1].
2.3.41 A^2 = I2, A^3 = A, A^4 = I2. The power A^n alternates between I2 for even n and A for odd n. Therefore A^1001 = A. The matrix represents a reflection about a line.

2.3.42 A^n = A for all n. The matrix A represents the projection onto the line x = y spanned by the vector [1; 1], and A^1001 = A = (1/2)[1 1; 1 1].
2.3.43 An example is A = [1 0; 0 −1], representing the reflection about the horizontal axis.

2.3.44 A rotation by π/2, given by the matrix A = [0 −1; 1 0].

2.3.45 For example, A = (1/2)[−1 −√3; √3 −1], the rotation through 2π/3. See Problem 2.3.40.

2.3.46 For example, A = (1/2)[1 1; 1 1], the orthogonal projection onto the line spanned by [1; 1].

2.3.47 For example, A = (1/2)[1 1; 1 1], the orthogonal projection onto the line spanned by [1; 1].

2.3.48 For example, the shear A = [1 1/10; 0 1].
2.3.49 AF = [1 0; 0 −1] represents the reflection about the x-axis, while FA = [−1 0; 0 1] represents the reflection about the y-axis. (See Figure 2.45.)

2.3.50 CG = [0 1; 1 0] represents a reflection about the line x = y, while GC = [0 −1; −1 0] represents a reflection about the line x = −y. (See Figure 2.46.)
2.3.51 FJ = JF = [−1 −1; 1 −1]; both represent a rotation through 3π/4 combined with a scaling by √2. (See Figure 2.47.)

2.3.52 JH = HJ = [0.2 −1.4; 1.4 0.2]. Since H represents a rotation and J represents a rotation through π/4 combined with a scaling by √2, the products in either order will be the same, representing a rotation combined with a scaling by √2. (See Figure 2.48.)
Figure 2.45: for Problem 2.3.49.
2.3.53 CD = [0 −1; 1 0] represents the rotation through π/2, while DC = [0 1; −1 0] represents the rotation through −π/2. (See Figure 2.49.)

2.3.54 BE = [−0.6 −0.8; 0.8 −0.6] represents the rotation through the angle θ = arccos(−0.6) ≈ 2.21, while EB = [−0.6 0.8; −0.8 −0.6] represents the rotation through −θ. (See Figure 2.50.)
2.3.55 We need to solve the matrix equation [1 2; 2 4][a b; c d] = [0 0; 0 0], which amounts to solving the system a + 2c = 0, 2a + 4c = 0, b + 2d = 0 and 2b + 4d = 0. The solutions are of the form a = −2c and b = −2d. Thus X = [−2c −2d; c d], where c, d are arbitrary constants.
Figure 2.46: for Problem 2.3.50.
2.3.56 Proceeding as in Exercise 55, we find X = [−5 2; 3 −1].

2.3.57 We need to solve the matrix equation [1 2; 3 5][a b; c d] = [1 0; 0 1], which amounts to solving the system a + 2c = 1, 3a + 5c = 0, b + 2d = 0 and 3b + 5d = 1. The solution is X = [−5 2; 3 −1].

2.3.58 Proceeding as in Exercise 55, we find X = [−2b b; −2d d], where b and d are arbitrary.

2.3.59 The matrix equation [a b; c d][2 1; 4 2] = [1 0; 0 1] has no solutions, since we have the inconsistent equations 2a + 4b = 1 and a + 2b = 0.
Figure 2.47: for Problem 2.3.51.
2.3.60 Proceeding as in Exercise 59, we find that this equation has no solutions.

2.3.61 We need to solve the matrix equation [1 2 3; 0 1 2][a b; c d; e f] = [1 0; 0 1], which amounts to solving the system a + 2c + 3e = 1, c + 2e = 0, b + 2d + 3f = 0 and d + 2f = 1. The solutions are of the form X = [e + 1  f − 2; −2e  1 − 2f; e  f], where e, f are arbitrary constants.

2.3.62 The matrix equation [1 0; 2 1; 3 2][a b c; d e f] = [1 0 0; 0 1 0; 0 0 1]
Figure 2.48: for Problem 2.3.52.
has no solutions, since we have the inconsistent equations a = 1, 2a + d = 0 and 3a + 2d = 0.
2.3.63 The matrix equation [1 4; 2 5; 3 6][a b c; d e f] = [1 0 0; 0 1 0; 0 0 1] has no solutions, since we have the inconsistent equations a + 4d = 1, 2a + 5d = 0, and 3a + 6d = 0.

2.3.64 Proceeding as in Exercise 61, we find X = [e − 5/3  f + 2/3; −2e + 4/3  −2f − 1/3; e  f], where e, f are arbitrary constants.

2.3.65 With X = [a b; 0 c], we have to solve X^2 = [a^2  ab + bc; 0  c^2] = [0 0; 0 0]. This means a = 0, c = 0, and b can be arbitrary. The general solution is X = [0 b; 0 0].
Figure 2.49: for Problem 2.3.53.
2.3.66 If X = [a 0 0; b c 0; d e f], then the diagonal entries of X^3 will be a^3, c^3, and f^3. Since we want X^3 = 0, we must have a = c = f = 0. If X = [0 0 0; b 0 0; d e 0], then a direct computation shows that X^3 = 0. Thus the solutions are of the form X = [0 0 0; b 0 0; d e 0], where b, d, e are arbitrary.
2.3.67 a. [1 1 1][a d g; b e h; c f i] = [a + b + c   d + e + f   g + h + i], by Definition 2.1.4.

b. For an n × n matrix A with nonnegative entries, the following statements are equivalent:

[1 1 . . . 1]A = [1 1 . . . 1]
⇔ Σ_{i=1}^{n} a_ij = 1 for all j = 1, . . . , n
⇔ A is a transition matrix
Figure 2.50: for Problem 2.3.54.
2.3.68 From Exercise 67b we know that [1 1 . . . 1]A = [1 1 . . . 1] and [1 1 . . . 1]B = [1 1 . . . 1]. Now [1 1 . . . 1]AB = ([1 1 . . . 1]A)B = [1 1 . . . 1]B = [1 1 . . . 1], so that AB is a transition matrix, again by Exercise 67b (note that all entries of AB will be nonnegative).
2.3.69 a. It means that 25% of the surfers who are on page 1 initially will find themselves on page 3 after following
two links.
b. The ijth entry of A2 is 0 if it is impossible to get from page j to page i by following two consecutive links.
This means that there is no path of length 2 in the graph of the mini-Web from point j to point i.
2.3.70 a. A^3 = [0 1/8 1/2 0; 5/8 1/2 0 1/4; 1/8 1/8 1/2 1/4; 1/4 1/4 0 1/2]
b. It means that 25% of the surfers who are on page 1 initially will find themselves on page 4 after following
three links.
c. The ijth entry of A3 is 0 if it is impossible to get from page j to page i by following three consecutive links.
This means that there is no path of length 3 in the graph of the mini-Web from point j to point i.
d. There are two paths, 1 → 2 → 1 → 2 and 1 → 3 → 4 → 2. Of the surfers who are on page 1 initially, (1/2) × (1/2) × (1/2) = 1/8 = 12.5% will follow the path 1 → 2 → 1 → 2, while (1/2) × 1 × 1 = 1/2 = 50% will follow the path 1 → 3 → 4 → 2.
2.3.71 We compute

A^4 = [5/16 1/4 0 1/8; 1/4 5/16 1/4 1/2; 5/16 5/16 1/4 1/8; 1/8 1/8 1/2 1/4].

We see that it is impossible to get from page 3 to page 1 by following four consecutive links.
2.3.72 We compute

A^5 = [1/8 5/32 1/8 1/4; 9/32 1/4 1/2 5/16; 9/32 9/32 1/8 5/16; 5/16 5/16 1/4 1/8].

Considering the matrix A^4 we found in Exercise 71, we see that 5 is the smallest positive integer m such that all entries of A^m are positive. A surfer can get from any page j to any page i by following five consecutive links. Equivalently, there is a path of length five in the mini-Web from any point j to any point i.
2.3.73 lim_{m→∞}(A^m ~x) = (lim_{m→∞} A^m)~x = [~x_equ ~x_equ . . . ~x_equ][x1; x2; . . . ; xn] = (x1 + x2 + . . . + xn)~x_equ = ~x_equ.

Note that x1 + x2 + . . . + xn = 1 since ~x is a distribution vector.
2.3.74 The transition matrix AB is not necessarily positive. Consider the case where A has a row of zeros, for example, A = [1 1; 0 0] and B = [1/2 1/2; 1/2 1/2], with AB = [1 1; 0 0].

The matrix BA, on the other hand, must be positive. Each entry of BA is the dot product ~w · ~v of a row ~w of B with a column ~v of A. All components of ~w are positive, all components of ~v are nonnegative, and at least one component vi of ~v is positive (since ~v is a distribution vector). Thus ~w · ~v ≥ wi vi > 0, showing that all entries of BA are positive as claimed.
2.3.75 Each entry of A^(m+1) = A^m A is the dot product ~w · ~v of a row ~w of A^m with a column ~v of A. All components of ~w are positive (since A^m is positive), all components of ~v are nonnegative (since A is a transition matrix), and at least one component vi of ~v is positive (since ~v is a distribution vector). Thus ~w · ~v ≥ wi vi > 0, showing that all entries of A^(m+1) are positive as claimed.
2.3.76 A = [0 1/2 0 0; 1 0 1 0; 0 0 0 1; 0 1/2 0 0], and each column of A^20 is approximately [0.2; 0.4; 0.2; 0.2] (the computed entries are values such as 0.2002, 0.3994, and 0.1992). This suggests ~x_equ = [0.2; 0.4; 0.2; 0.2], and we can verify that A[0.2; 0.4; 0.2; 0.2] = [0.2; 0.4; 0.2; 0.2].

Page 2 has the highest naive PageRank.
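The equilibrium claim of Exercise 2.3.76 can be verified exactly, and power iteration from an arbitrary distribution vector exhibits the convergence suggested by A^20. The matrix below is the transition matrix of Exercise 2.3.76 as read off from the displayed entries; treat the exact entries as an assumption.

```python
from fractions import Fraction as F

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

h = F(1, 2)
A = [[0, h, 0, 0],
     [1, 0, 1, 0],
     [0, 0, 0, 1],
     [0, h, 0, 0]]

x_equ = [F(1, 5), F(2, 5), F(1, 5), F(1, 5)]     # (0.2, 0.4, 0.2, 0.2)
assert matvec(A, x_equ) == x_equ                 # an equilibrium distribution
assert sum(x_equ) == 1

# iterating A from another distribution vector drifts toward x_equ
x = [F(1), F(0), F(0), F(0)]
for _ in range(50):
    x = matvec(A, x)
assert all(abs(x[i] - x_equ[i]) < F(1, 1000) for i in range(4))
```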
2.3.77 A^10 ≈ [0.5003 0.4985 0.5000; 0.0996 0.1026 0.0999; 0.4002 0.3989 0.4001] suggests ~x_equ = [0.5; 0.1; 0.4]. We can verify that A[0.5; 0.1; 0.4] = [0.5; 0.1; 0.4].
2.3.78 B^15 has entries such as 0.1785, 0.3214, 0.2500, and 0.2501; each column is approximately (1/28)[5; 9; 7; 7]. This suggests ~x_equ = [5/28; 9/28; 7/28; 7/28] = [5/28; 9/28; 1/4; 1/4]. We can verify that B[5/28; 9/28; 1/4; 1/4] = [5/28; 9/28; 1/4; 1/4].
2.3.79 An extreme example is the identity matrix In, where In~x = ~x for all distribution vectors ~x.

2.3.80 One example is the reflection matrix A = [0 1; 1 0], where A^m = I2 = [1 0; 0 1] for even m and A^m = A = [0 1; 1 0] for odd m.
2.3.81 If A~v = 5~v, then A^2~v = A(A~v) = A(5~v) = 5A~v = 5^2~v. We can show by induction on m that A^m~v = 5^m~v. Indeed, A^(m+1)~v = A(A^m~v) = A(5^m~v) = 5^m A~v = 5^m · 5~v = 5^(m+1)~v, where in the second step we have used the induction hypothesis.
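A concrete instance of Exercise 2.3.81: the matrix and eigenvector below are illustrative choices satisfying A~v = 5~v, and repeated application of A multiplies ~v by another factor of 5 each time.

```python
def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[4, 1], [2, 3]]        # sample matrix with A v = 5 v for v = (1, 1)
v = [1, 1]
assert matvec(A, v) == [5, 5]

x, power = v[:], 1
for _ in range(7):          # apply A seven times: A^7 v should equal 5^7 v
    x = matvec(A, x)
    power *= 5
assert x == [power * vi for vi in v]   # 5^7 = 78125
```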
2.3.82 a. A[1; 2] = [1; 2] and A[1; −1] = (1/10)[1; −1].

b. ~x = [1; 0] = (1/3)[1; 2] + (2/3)[1; −1]

c. Using Exercise 81, we find that A^m[1; 2] = [1; 2] and A^m[1; −1] = (1/10^m)[1; −1] for all positive integers m. Now A^m~x = A^m((1/3)[1; 2] + (2/3)[1; −1]) = (1/3)A^m[1; 2] + (2/3)A^m[1; −1] = (1/3)[1; 2] + (2/(3 · 10^m))[1; −1].

d. lim_{m→∞}(A^m~x) = lim_{m→∞}((1/3)[1; 2] + (2/(3 · 10^m))[1; −1]) = (1/3)[1; 2] = [1/3; 2/3]. We can verify that A[1/3; 2/3] = [1/3; 2/3], so that [1/3; 2/3] is indeed the equilibrium distribution for A.
2.3.83 Pick a positive integer m such that A^m is a positive transition matrix; note that A^m~x = ~x. The equation A^m~x = ~x implies that the jth component xj of ~x is the dot product ~w · ~x, where ~w is the jth row of A^m. All components of ~w are positive, all components of ~x are nonnegative, and at least one component xi of ~x is positive (since ~x is a distribution vector). Thus xj = ~w · ~x ≥ wi xi > 0, showing that all components of ~x are positive as claimed.
2.3.84 Let ~v1 , . . . , ~vn be the columns of the matrix X. Solving the matrix equation AX = In amounts to solving the
linear systems A~vi = ~ei for i = 1, . . . , n. Since A is an n × m matrix of rank n, all these systems are consistent, so that the matrix equation AX = In does have at least one solution. If n < m, each of these systems has free variables, so there are in fact infinitely many solutions.
Applying these operations to In , you end up with an upper triangular matrix.
d As in part (b): if all diagonal entries are nonzero.
2.4.36 If a matrix A can be transformed into B by elementary row operations, then A is invertible if (and only if)
B is invertible. The claim now follows from Exercise 35, where we show that a triangular matrix is invertible if
(and only if) its diagonal entries are nonzero.
2.4.37 Make an attempt to solve the linear equation ~y = (cA)~x = c(A~x) for ~x: A~x = (1/c)~y, so that ~x = A^(−1)((1/c)~y) = (1/c)A^(−1)~y.

This shows that cA is indeed invertible, with (cA)^(−1) = (1/c)A^(−1).
2.4.38 Use Theorem 2.4.9: A^(−1) = (1/(−1))[−1 −k; 0 1] = [1 k; 0 −1] (= A).
2.4.39 Suppose the ijth entry of M is k, and all other entries are as in the identity matrix. Then we can find rref[M | In] by subtracting k times the jth row from the ith row. Therefore, M is indeed invertible, and M^(−1) differs from the identity matrix only at the ijth entry; that entry is −k. (See Figure 2.52.)

Figure 2.52: for Problem 2.4.39.
2.4.40 If you apply an elementary row operation to a matrix with two equal columns, then the resulting matrix will also have two equal columns. Therefore, rref(A) has two equal columns, so that rref(A) ≠ In. Now use Theorem 2.4.3.
2.4.41 a Invertible: the transformation is its own inverse.
b Not invertible: the equation T (~x) = ~b has infinitely many solutions if ~b is on the plane, and none otherwise.
c Invertible: The inverse is a scaling by 1/5 (that is, a contraction by 5). If ~y = 5~x, then ~x = (1/5)~y.
d Invertible: The inverse is a rotation about the same axis through the same angle in the opposite direction.
2.4.42 Permutation matrices are invertible since they row reduce to In in an obvious way, just by row swaps. The inverse of a permutation matrix A is also a permutation matrix, since rref[A | In] = [In | A^(−1)] is obtained from [A | In] by a sequence of row swaps.
2.4.43 We make an attempt to solve the equation ~y = A(B~x) for ~x:
B~x = Aโ1 ~y , so that ~x = B โ1 (Aโ1 ~y ).
2.4.44 a rref(M4) = [1 0 −1 −2; 0 1 2 3; 0 0 0 0; 0 0 0 0], so that rank(M4) = 2.

b To simplify the notation, we introduce the row vectors ~v = [1 1 . . . 1] and ~w = [0 n 2n . . . (n − 1)n] with n components.
Then we can write Mn in terms of its rows as Mn = [~v + ~w; 2~v + ~w; · · · ; n~v + ~w], and subtract i times the first row from the ith row, for i = 2, . . . , n.
Applying the Gauss-Jordan algorithm to the first column, we get [~v + ~w; −~w; −2~w; · · · ; −(n − 1)~w].
All the rows below the second are scalar multiples of the second; therefore, rank(Mn ) = 2.
c By part (b), the matrix Mn is invertible only if n = 2.
2.4.45 a Each of the three row divisions requires three multiplicative operations, and each of the six row subtractions requires three multiplicative operations as well; altogether, we have 3 · 3 + 6 · 3 = 9 · 3 = 27 = 3^3 operations.
b Suppose we have already taken care of the first m columns: [A | In] has been reduced to the matrix in Figure 2.53.

Figure 2.53: for Problem 2.4.45b.
Here, the stars represent arbitrary entries.
Suppose the (m+1)th entry on the diagonal is k. Dividing the (m+1)th
row by k requires n operations: nโmโ1
to the left of the dotted line not counting the computation kk = 1 , and m + 1 to the right of the dotted line
including k1 . Now the matrix has the form shown in Figure 2.54.
Figure 2.54: for Problem 2.4.45b.
Eliminating each of the other n โ 1 components of the (m + 1)th column now requires n multiplicative operations
(n โ m โ 1 to the left of the dotted line, and m + 1 to the right). Altogether, it requires n + (n โ 1)n = n2
operations to process the mth column. To process all n columns requires n ยท n2 = n3 operations.
c The inversion of a 12 × 12 matrix requires 12³ = 4³ · 3³ = 64 · 3³ operations, that is, 64 times as much as the inversion of a 3 × 3 matrix. If the inversion of a 3 × 3 matrix takes one second, then the inversion of a 12 × 12 matrix takes 64 seconds.
2.4.46 Computing A⁻¹~b requires n³ + n² operations: First, we need n³ operations to find A⁻¹ (see Exercise 45b) and then n² operations to compute A⁻¹~b (n multiplications for each component).

How many operations are required to perform Gauss-Jordan elimination on [A | ~b]? Let us count these operations "column by column." If m columns of the coefficient matrix are left, then processing the next column requires nm operations (compare with Exercise 45b). To process all the columns requires

n · n + n(n − 1) + · · · + n · 2 + n · 1 = n(n + (n − 1) + · · · + 2 + 1) = n · n(n + 1)/2 = (n³ + n²)/2

operations, only half of what was required to compute A⁻¹~b.
We mention in passing that one can reduce the number of operations further (by about 50% for large matrices)
by performing the steps of the row reduction in a different order.
2.4.47 Let f(x) = x²; the equation f(x) = 0 has the unique solution x = 0.
2.4.48 Consider the linear system A~x = ~0. The equation A~x = ~0 implies that BA~x = ~0, so ~x = ~0 since BA = I_m. Thus the system A~x = ~0 has the unique solution ~x = ~0. This implies m ≤ n, by Theorem 1.3.3. Likewise the linear system B~y = ~0 has the unique solution ~y = ~0, implying that n ≤ m. It follows that n = m, as claimed.
2.4.49 a A = [0.293 0 0; 0.014 0.207 0.017; 0.044 0.01 0.216], I_3 − A = [0.707 0 0; −0.014 0.793 −0.017; −0.044 −0.01 0.784],

(I_3 − A)⁻¹ ≈ [1.41 0 0; 0.0267 1.26 0.0274; 0.0797 0.0161 1.28]

b We have ~b = ~e_1 = [1, 0, 0]^T, so that ~x = (I_3 − A)⁻¹~e_1 = first column of (I_3 − A)⁻¹ ≈ [1.41, 0.0267, 0.0797]^T.
c As illustrated in part (b), the ith column of (I_3 − A)⁻¹ gives the output vector required to satisfy a consumer demand of 1 unit on industry i, in the absence of any other consumer demands. In particular, the ith diagonal entry of (I_3 − A)⁻¹ gives the output of industry i required to satisfy this demand. Since industry i has to satisfy the consumer demand of 1 as well as the interindustry demand, its total output will be at least 1.
d Suppose the consumer demand increases from ~b to ~b + ~e_2 (that is, the demand on manufacturing increases by one unit). Then the output must change from (I_3 − A)⁻¹~b to
(I_3 − A)⁻¹(~b + ~e_2) = (I_3 − A)⁻¹~b + (I_3 − A)⁻¹~e_2 = (I_3 − A)⁻¹~b + (second column of (I_3 − A)⁻¹).
The components of the second column of (I_3 − A)⁻¹ tell us by how much each industry has to increase its output.
e The ijth entry of (I_n − A)⁻¹ gives the required increase of the output x_i of industry i to satisfy an increase of the consumer demand b_j on industry j by one unit. In the language of multivariable calculus, this quantity is ∂x_i/∂b_j.
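The numbers in parts (a) and (b) can be checked with a plain Gauss-Jordan routine. A minimal sketch (the helper `invert` is illustrative, not from the text; entries are taken from part (a)):

```python
def invert(M):
    """Invert a square matrix by Gauss-Jordan elimination on [M | I]."""
    n = len(M)
    # Build the augmented matrix [M | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        # Divide the pivot row by the pivot entry ...
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # ... and clear the rest of the column.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

A = [[0.293, 0.0, 0.0],
     [0.014, 0.207, 0.017],
     [0.044, 0.01, 0.216]]
I_minus_A = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(3)] for i in range(3)]
inv = invert(I_minus_A)
first_column = [inv[i][0] for i in range(3)]
print([round(x, 4) for x in first_column])  # close to [1.41, 0.0267, 0.0797]
```

The first column agrees with the output vector found in part (b).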
2.4.50 Recall that 1 + k + k² + · · · = 1/(1 − k).

The top left entry of I_3 − A is 1 − k, and the top left entry of (I_3 − A)⁻¹ will therefore be 1/(1 − k), as claimed: the first row of I_3 − A is [1 − k, 0, 0], so dividing the first row of the augmented matrix [I_3 − A | I_3] by 1 − k turns it into [1, 0, 0 | 1/(1 − k), 0, 0], and the remaining steps of the Gauss-Jordan algorithm leave this first row unchanged (its entries in the other pivot columns are already 0).

In terms of economics, we can explain this fact as follows: The top left entry of (I_3 − A)⁻¹ is the output of industry 1 (Agriculture) required to satisfy a consumer demand of 1 unit on industry 1. Producing this one unit to satisfy the consumer demand will generate an extra demand of k = 0.293 units on industry 1. Producing these k units in turn will generate an extra demand of k · k = k² units, and so forth. We are faced with an infinite series of (ever smaller) demands, 1 + k + k² + · · · .
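The chain of demands can be summed directly; a quick numerical check that the partial sums of 1 + k + k² + · · · approach 1/(1 − k) for k = 0.293:

```python
k = 0.293  # interindustry demand generated per unit of output

# Partial sums of the geometric series 1 + k + k^2 + ...
total, term = 0.0, 1.0
for _ in range(100):
    total += term
    term *= k
print(round(total, 6), round(1 / (1 - k), 6))  # the two agree
```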
2.4.51 a Since rank(A) < n, the matrix E = rref(A) will not have a leading one in the last row, and all entries in the last row of E will be zero.

Let ~c = [0, 0, . . . , 0, 1]^T. Then the last equation of the system E~x = ~c reads 0 = 1, so this system is inconsistent.

Now, we can "rebuild" ~b from ~c by performing the reverse row operations in the opposite order on [E | ~c] until we reach [A | ~b]. Since E~x = ~c is inconsistent, A~x = ~b is inconsistent as well.

b Since rank(A) ≤ min(n, m), and m < n, rank(A) < n also. Thus, by part a, there is a ~b such that A~x = ~b is inconsistent.
2.4.52 Let ~b = [0, 0, 1, 0]^T. Then [A | ~b] = [1 2 | 0; 2 4 | 0; 3 6 | 1; 4 8 | 0]. Row reduction turns the third row into [0 0 | 1], so the system A~x = ~b is inconsistent.

2.4.53 a A − λI_2 = [3 − λ, 1; 3, 5 − λ].

This fails to be invertible when (3 − λ)(5 − λ) − 3 = 0, or 15 − 8λ + λ² − 3 = 0, or 12 − 8λ + λ² = 0, or (6 − λ)(2 − λ) = 0. So λ = 6 or λ = 2.
b For λ = 6, A − λI_2 = [−3 1; 3 −1]. The system (A − 6I_2)~x = ~0 has the solutions [t, 3t]^T, where t is an arbitrary constant. Pick ~x = [1, 3]^T, for example.

For λ = 2, A − λI_2 = [1 1; 3 3]. The system (A − 2I_2)~x = ~0 has the solutions [t, −t]^T, where t is an arbitrary constant. Pick ~x = [1, −1]^T, for example.

c For λ = 6, A~x = [3 1; 3 5][1, 3]^T = [6, 18]^T = 6[1, 3]^T.

For λ = 2, A~x = [3 1; 3 5][1, −1]^T = [2, −2]^T = 2[1, −1]^T.
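The eigenvalue-eigenvector pairs found above can be verified by direct multiplication; a small sketch:

```python
def mat_vec(A, v):
    """Multiply a 2x2 matrix by a vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[3, 1], [3, 5]]
# Characteristic polynomial: (3 - t)(5 - t) - 3 = t^2 - 8t + 12 = (t - 6)(t - 2)
for lam, x in [(6, [1, 3]), (2, [1, -1])]:
    assert mat_vec(A, x) == [lam * xi for xi in x]
print("both eigenpairs check out")
```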
2.4.54 A − λI_2 = [1 − λ, 10; −3, 12 − λ]. This fails to be invertible when det(A − λI_2) = 0, so 0 = (1 − λ)(12 − λ) + 30 = 12 − 13λ + λ² + 30 = λ² − 13λ + 42 = (λ − 6)(λ − 7). In order for this to be zero, λ must be 6 or 7.

If λ = 6, then A − 6I_2 = [−5 10; −3 6]. We solve the system (A − 6I_2)~x = ~0 and find that the solutions are of the form ~x = [2t, t]^T. For example, when t = 1, we find ~x = [2, 1]^T.

If λ = 7, then A − 7I_2 = [−6 10; −3 5]. Here we solve the system (A − 7I_2)~x = ~0, this time finding that our solutions are of the form ~x = [5t, 3t]^T. For example, for t = 1, we find ~x = [5, 3]^T.
2.4.55 The determinant of A is equal to 4 and A⁻¹ = [1/2 0; 0 1/2]. The linear transformation defined by A is a scaling by a factor 2, and A⁻¹ defines a scaling by 1/2. The determinant of A is the area of the square spanned by ~v = [2, 0]^T and ~w = [0, 2]^T. The angle θ from ~v to ~w is π/2. (See Figure 2.55.)
2.4.56 The determinant of A is 1. The matrix is invertible with inverse A⁻¹ = [cos α, sin α; −sin α, cos α]. The linear transformation defined by A is a rotation by angle α in the counterclockwise direction. The inverse represents a rotation by the angle α in the clockwise direction. The determinant of A is the area of the unit square spanned by ~v = [cos α, sin α]^T and ~w = [−sin α, cos α]^T. The angle θ from ~v to ~w is π/2. (See Figure 2.56.)
2.4.57 The determinant of A is −1. Matrix A is invertible, with A⁻¹ = A. Matrices A and A⁻¹ define the reflection about the line spanned by ~v = [cos(α/2), sin(α/2)]^T. The absolute value of the determinant of A is the area of the unit square spanned by ~v = [cos α, sin α]^T and ~w = [sin α, −cos α]^T. The angle θ from ~v to ~w is −π/2. (See Figure 2.57.)

Figure 2.55: for Problem 2.4.55 (square spanned by ~v = [2, 0]^T and ~w = [0, 2]^T, with θ = π/2).

Figure 2.56: for Problem 2.4.56 (unit square spanned by ~v = [cos α, sin α]^T and ~w = [−sin α, cos α]^T, with θ = π/2).
2.4.58 The determinant of A is 9. The matrix is invertible with inverse A⁻¹ = [−1/3 0; 0 −1/3]. The linear transformation defined by A is a reflection about the origin combined with a scaling by a factor 3. The inverse defines a reflection about the origin combined with a scaling by a factor 1/3. The determinant is the area of the square spanned by ~v = [−3, 0]^T and ~w = [0, −3]^T. The angle θ from ~v to ~w is π/2. (See Figure 2.58.)

2.4.59 The determinant of A is 1. The matrix A is invertible with inverse A⁻¹ = [0.6 0.8; −0.8 0.6]. The matrix A represents the rotation through the angle α = arccos(0.6). Its inverse represents a rotation by the same angle in the clockwise direction. The determinant of A is the area of the unit square spanned by ~v = [0.6, 0.8]^T and ~w = [−0.8, 0.6]^T. The angle θ from ~v to ~w is π/2. (See Figure 2.59.)

Figure 2.57: for Problem 2.4.57 (unit square spanned by ~v = [cos α, sin α]^T and ~w = [sin α, −cos α]^T, with θ = −π/2).

Figure 2.58: for Problem 2.4.58 (square spanned by ~v = [−3, 0]^T and ~w = [0, −3]^T, with θ = π/2).
2.4.60 The determinant of A is −1. The matrix A is invertible with inverse A⁻¹ = A. Matrices A and A⁻¹ define the reflection about the line spanned by ~v = [cos(α/2), sin(α/2)]^T, where α = arccos(−0.8). The absolute value of the determinant of A is the area of the unit square spanned by ~v = [−0.8, 0.6]^T and ~w = [0.6, 0.8]^T. The angle θ from ~v to ~w is −π/2. (See Figure 2.60.)
2.4.61 The determinant of A is 2 and A⁻¹ = (1/2)[1 −1; 1 1]. The matrix A represents a rotation through the angle −π/4 combined with scaling by √2, while A⁻¹ describes a rotation through π/4 and scaling by 1/√2. The determinant of A is the area of the square spanned by ~v = [1, −1]^T and ~w = [1, 1]^T, with side length √2. The angle θ from ~v to ~w is π/2. (See Figure 2.61.)
Figure 2.59: for Problem 2.4.59 (unit square spanned by ~v = [0.6, 0.8]^T and ~w = [−0.8, 0.6]^T, with θ = π/2).

Figure 2.60: for Problem 2.4.60 (unit square spanned by ~v = [−0.8, 0.6]^T and ~w = [0.6, 0.8]^T, with θ = −π/2).

Figure 2.61: for Problem 2.4.61 (square spanned by ~v = [1, −1]^T and ~w = [1, 1]^T, with θ = π/2).
2.4.62 The determinant of A is 1 and A⁻¹ = [1 1; 0 1]. Both A and A⁻¹ represent horizontal shears. The determinant of A is the area of the parallelogram spanned by ~v = [1, 0]^T and ~w = [−1, 1]^T. The angle from ~v to ~w is 3π/4. (See Figure 2.62.)

Figure 2.62: for Problem 2.4.62 (parallelogram spanned by ~v = [1, 0]^T and ~w = [−1, 1]^T, with θ = 3π/4).
2.4.63 The determinant of A is −25 and A⁻¹ = (1/25)[−3 4; 4 3] = (1/25)A. The matrix A represents a reflection about a line combined with a scaling by 5, while A⁻¹ represents a reflection about the same line combined with a scaling by 1/5. The absolute value of the determinant of A is the area of the square spanned by ~v = [−3, 4]^T and ~w = [4, 3]^T, with side length 5. The angle from ~v to ~w is −π/2. (See Figure 2.63.)

Figure 2.63: for Problem 2.4.63 (square spanned by ~v = [−3, 4]^T and ~w = [4, 3]^T, with θ = −π/2).
2.4.64 The determinant of A is 25. The matrix A is a rotation-dilation matrix with scaling factor 5 and rotation by the angle arccos(0.6) in the clockwise direction. The inverse A⁻¹ = (1/25)[3 −4; 4 3] is a rotation-dilation too, with scaling factor 1/5 and rotation angle arccos(0.6). The determinant of A is the area of the square spanned by ~v = [3, −4]^T and ~w = [4, 3]^T, with side length 5. The angle from ~v to ~w is π/2. (See Figure 2.64.)

Figure 2.64: for Problem 2.4.64 (square spanned by ~v = [3, −4]^T and ~w = [4, 3]^T, with θ = π/2).
2.4.65 The determinant of A is 1 and A⁻¹ = [1 0; −1 1]. Both A and A⁻¹ represent vertical shears. The determinant of A is the area of the parallelogram spanned by ~v = [1, 1]^T and ~w = [0, 1]^T. The angle from ~v to ~w is π/4. (See Figure 2.65.)

Figure 2.65: for Problem 2.4.65 (parallelogram spanned by ~v = [1, 1]^T and ~w = [0, 1]^T, with θ = π/4).
2.4.66 We can write AB(AB)⁻¹ = A(B(AB)⁻¹) = I_n and (AB)⁻¹AB = ((AB)⁻¹A)B = I_n. By Theorem 2.4.8, A and B are invertible.
2.4.67 Not necessarily true; (A + B)² = (A + B)(A + B) = A² + AB + BA + B² ≠ A² + 2AB + B² if AB ≠ BA.

2.4.68 Not necessarily true; (A − B)(A + B) = A² + AB − BA − B² ≠ A² − B² if AB ≠ BA.

2.4.69 Not necessarily true; consider the case A = I_n and B = −I_n.

2.4.70 True; apply Theorem 2.4.7 to B = A.

2.4.71 True; ABB⁻¹A⁻¹ = AI_nA⁻¹ = AA⁻¹ = I_n.

2.4.72 Not necessarily true; the equation ABA⁻¹ = B is equivalent to AB = BA (multiply by A from the right), which is not true in general.

2.4.73 True; (ABA⁻¹)³ = ABA⁻¹ABA⁻¹ABA⁻¹ = AB³A⁻¹.

2.4.74 True; (I_n + A)(I_n + A⁻¹) = I_n² + A + A⁻¹ + AA⁻¹ = 2I_n + A + A⁻¹.

2.4.75 True; (A⁻¹B)⁻¹ = B⁻¹(A⁻¹)⁻¹ = B⁻¹A (use Theorem 2.4.7).
2.4.76 We want A such that A[1 2; 2 5] = [2 1; 1 3], so that A = [2 1; 1 3][1 2; 2 5]⁻¹ = [8 −3; −1 1].
2.4.77 We want A such that A~v_i = ~w_i, for i = 1, 2, . . . , m, or A[~v_1 ~v_2 . . . ~v_m] = [~w_1 ~w_2 . . . ~w_m], or AS = B. Multiplying by S⁻¹ from the right we find the unique solution A = BS⁻¹.
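The recipe A = BS⁻¹ is easy to check on the data of Exercise 2.4.76; a sketch using a hand-coded 2 × 2 inverse (the helpers `inv2` and `matmul` are illustrative, not from the text):

```python
def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[1, 2], [2, 5]]   # columns are the given vectors v1, v2
B = [[2, 1], [1, 3]]   # columns are the targets w1, w2
A = matmul(B, inv2(S))
print(A)  # [[8.0, -3.0], [-1.0, 1.0]]
```

The result matches the matrix found in Exercise 2.4.76.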
2.4.78 Use the result of Exercise 2.4.77, with S = [1 2; 2 5] and B = [7 1; 5 2; 3 3];
A = BS⁻¹ = [33 −13; 21 −8; 9 −3].

2.4.79 Use the result of Exercise 2.4.77, with S = [3 1; 1 2] and B = [6 3; 2 6];
A = BS⁻¹ = (1/5)[9 3; −2 16].
2.4.80 Under T: P0 → P1, P1 → P3, P2 → P2, P3 → P0.
Under L: P0 → P0, P1 → P2, P2 → P1, P3 → P3.
a. T⁻¹ is the rotation about the axis through 0 and P2 that transforms P3 into P1.
b. L⁻¹ = L
c. T² = T⁻¹ (See part (a).)
d. Under T ∘ L: P0 → P1, P1 → P2, P2 → P3, P3 → P0.
Under L ∘ T: P0 → P2, P1 → P3, P2 → P1, P3 → P0.
The transformations T ∘ L and L ∘ T are not the same.
e. Under L ∘ T ∘ L: P0 → P2, P1 → P1, P2 → P3, P3 → P0.
This is the rotation about the axis through 0 and P1 that sends P0 to P2.
2.4.81 Let A be the matrix of T and C the matrix of L. We want AP0 = P1, AP1 = P3, and AP2 = P2. We can use the result of Exercise 77, with S = [1 1 −1; 1 −1 1; 1 −1 −1] and B = [1 −1 −1; −1 −1 1; −1 1 −1].

Then A = BS⁻¹ = [0 0 1; −1 0 0; 0 −1 0].

Using an analogous approach, we find that C = [0 1 0; 1 0 0; 0 0 1].
2.4.82 a EA = [a, b, c; d − 3a, e − 3b, f − 3c; g, h, k]
The matrix EA is obtained from A by an elementary row operation: subtract three times the first row from the second.

b EA = [a, b, c; (1/4)d, (1/4)e, (1/4)f; g, h, k]
The matrix EA is obtained from A by dividing the second row of A by 4 (an elementary row operation).

c If we set E = [1 0 0; 0 0 1; 0 1 0], then [1 0 0; 0 0 1; 0 1 0][a b c; d e f; g h k] = [a b c; g h k; d e f], as desired.

d An elementary n × n matrix E has the same form as I_n except that either
• e_ij = k (≠ 0) for some i ≠ j [as in part (a)], or
• e_ii = k (≠ 0, 1) for some i [as in part (b)], or
• e_ij = e_ji = 1, e_ii = e_jj = 0 for some i ≠ j [as in part (c)].
2.4.83 Let E be an elementary n × n matrix (obtained from I_n by a certain elementary row operation), and let F be the elementary matrix obtained from I_n by the reversed row operation. Our work in Exercise 2.4.82 [parts (a) through (c)] shows that EF = I_n, so that E is indeed invertible, and E⁻¹ = F is an elementary matrix as well.

2.4.84 a The matrix rref(A) is obtained from A by performing a sequence of p elementary row operations. By Exercise 2.4.82 [parts (a) through (c)] each of these operations can be represented by left multiplication with an elementary matrix, so that rref(A) = E1 E2 . . . Ep A.
b A = [0 2; 1 3] → (swap rows 1 and 2, represented by [0 1; 1 0]) → [1 3; 0 2] → (÷2 on row 2, represented by [1 0; 0 1/2]) → [1 3; 0 1] → (−3(II), represented by [1 −3; 0 1]) → [1 0; 0 1].

Therefore, rref(A) = [1 0; 0 1] = [1 −3; 0 1] [1 0; 0 1/2] [0 1; 1 0] [0 2; 1 3] = E1 E2 E3 A.
2.4.85 a Let S = E1 E2 . . . Ep in Exercise 2.4.84a. By Exercise 2.4.83, the elementary matrices Ei are invertible; now use Theorem 2.4.7 repeatedly to see that S is invertible.

b A = [2 4; 4 8] → (÷2 on row 1, represented by [1/2 0; 0 1]) → [1 2; 4 8] → (−4(I), represented by [1 0; −4 1]) → [1 2; 0 0] = rref(A).

Therefore, rref(A) = [1 2; 0 0] = [1 0; −4 1] [1/2 0; 0 1] [2 4; 4 8] = E1 E2 A = SA, where S = E1 E2 = [1/2 0; −2 1].

(There are other correct answers.)
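The claim rref(A) = SA from part (b) can be verified by direct multiplication; a sketch (the helper `matmul` is illustrative, not from the text):

```python
def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E1 = [[1, 0], [-4, 1]]    # subtract 4 times row 1 from row 2
E2 = [[0.5, 0], [0, 1]]   # divide row 1 by 2
S = matmul(E1, E2)        # S = E1 E2, the matrix from part (b)
A = [[2, 4], [4, 8]]
print(S)
print(matmul(S, A))  # [[1.0, 2.0], [0.0, 0.0]], which is rref(A)
```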
2.4.86 a By Exercise 2.4.84a, I_n = rref(A) = E1 E2 . . . Ep A, for some elementary matrices E1, . . . , Ep. By Exercise 2.4.83, the Ei are invertible and their inverses are elementary as well. Therefore, A = (E1 E2 . . . Ep)⁻¹ = Ep⁻¹ . . . E2⁻¹ E1⁻¹ expresses A as a product of elementary matrices.
b We can use our work in Exercise 2.4.84b:
A = [0 2; 1 3] = E3⁻¹ E2⁻¹ E1⁻¹ = [0 1; 1 0]⁻¹ [1 0; 0 1/2]⁻¹ [1 −3; 0 1]⁻¹ = [0 1; 1 0] [1 0; 0 2] [1 3; 0 1].
2.4.87 [1 0; k 1] represents a vertical shear, [1 k; 0 1] represents a horizontal shear,
[k 0; 0 1] represents a "scaling in ~e_1 direction" (leaving the ~e_2 component unchanged),
[1 0; 0 k] represents a "scaling in ~e_2 direction" (leaving the ~e_1 component unchanged), and
[0 1; 1 0] represents the reflection about the line spanned by [1, 1]^T.
2.4.88 Performing a sequence of p elementary row operations on a matrix A amounts to multiplying A with E1 E2 . . . Ep from the left, where the Ei are elementary matrices. If I_n = E1 E2 . . . Ep A, then E1 E2 . . . Ep = A⁻¹, so that
a. E1 E2 . . . Ep AB = B, and
b. E1 E2 . . . Ep I_n = A⁻¹.
2.4.89 Let A and B be two lower triangular n × n matrices. We need to show that the ijth entry of AB is 0 whenever i < j.

This entry is the dot product of the ith row of A and the jth column of B,
[a_i1 a_i2 . . . a_ii 0 . . . 0] · [0, . . . , 0, b_jj, . . . , b_nj]^T,
which is indeed 0 if i < j.
2.4.90 a [1 2 3; 2 6 7; 2 2 4] → (subtract 2(I) from rows 2 and 3, represented by [1 0 0; −2 1 0; 0 0 1] and [1 0 0; 0 1 0; −2 0 1]) → [1 2 3; 0 2 1; 0 −2 −2] → (add (II) to row 3, represented by [1 0 0; 0 1 0; 0 1 1])
→ [1 2 3; 0 2 1; 0 0 −1] = U, so that

U = E3 E2 E1 A, that is, [1 2 3; 0 2 1; 0 0 −1] = [1 0 0; 0 1 0; 0 1 1] [1 0 0; 0 1 0; −2 0 1] [1 0 0; −2 1 0; 0 0 1] [1 2 3; 2 6 7; 2 2 4].

b A = (E3 E2 E1)⁻¹ U = E1⁻¹ E2⁻¹ E3⁻¹ U = [1 0 0; 2 1 0; 0 0 1] [1 0 0; 0 1 0; 2 0 1] [1 0 0; 0 1 0; 0 −1 1] [1 2 3; 0 2 1; 0 0 −1] = M1 M2 M3 U.

c Let L = M1 M2 M3 in part (b); we compute L = [1 0 0; 2 1 0; 2 −1 1].
Then [1 2 3; 2 6 7; 2 2 4] = [1 0 0; 2 1 0; 2 −1 1] [1 2 3; 0 2 1; 0 0 −1], that is, A = LU.

d We can use the matrix L we found in part (c), but U needs to be modified. Let D = [1 0 0; 0 2 0; 0 0 −1] (take the diagonal entries of the matrix U in part (c)).
Then [1 2 3; 2 6 7; 2 2 4] = [1 0 0; 2 1 0; 2 −1 1] [1 0 0; 0 2 0; 0 0 −1] [1 2 3; 0 1 1/2; 0 0 1], that is, A = LDU.
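The L and U of part (c) can be reproduced with a plain Doolittle elimination, which works here because no row swaps are needed; a sketch (the helper `lu` is illustrative, not from the text):

```python
def lu(A):
    """Doolittle LU factorization without pivoting (assumes nonzero pivots)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for col in range(n):
        for r in range(col + 1, n):
            f = U[r][col] / U[col][col]   # elimination multiplier, stored in L
            L[r][col] = f
            U[r] = [x - f * y for x, y in zip(U[r], U[col])]
    return L, U

A = [[1, 2, 3], [2, 6, 7], [2, 2, 4]]
L, U = lu(A)
print(L)  # matches L = [1 0 0; 2 1 0; 2 -1 1] from part (c)
print(U)  # matches U = [1 2 3; 0 2 1; 0 0 -1] from part (c)
```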
2.4.91 a Write the system L~y = ~b in components:
y1 = −3
−3y1 + y2 = 14
y1 + 2y2 + y3 = 9
−y1 + 8y2 − 5y3 + y4 = 33,
so that y1 = −3, y2 = 14 + 3y1 = 5, y3 = 9 − y1 − 2y2 = 2, and y4 = 33 + y1 − 8y2 + 5y3 = 0:
~y = [−3, 5, 2, 0]^T.

b Proceeding as in part (a) we find that ~x = [1, −1, 2, 0]^T.
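The forward-substitution pattern of part (a) is mechanical; a sketch, with the lower triangular system read off from the components above (the helper `forward_substitute` is illustrative, not from the text):

```python
def forward_substitute(L, b):
    """Solve L y = b for a lower triangular L with unit diagonal."""
    y = []
    for i, row in enumerate(L):
        # Each equation determines y_i from the previously computed entries.
        y.append(b[i] - sum(row[j] * y[j] for j in range(i)))
    return y

L = [[1, 0, 0, 0],
     [-3, 1, 0, 0],
     [1, 2, 1, 0],
     [-1, 8, -5, 1]]
b = [-3, 14, 9, 33]
print(forward_substitute(L, b))  # [-3, 5, 2, 0]
```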
2.4.92 We try to find matrices L = [a 0; b c] and U = [d e; 0 f] such that
[0 1; 1 0] = [a 0; b c][d e; 0 f] = [ad, ae; bd, be + cf].
Note that the equations ad = 0, ae = 1, and bd = 1 cannot be solved simultaneously: If ad = 0, then a or d is 0, so that ae or bd is zero.
Therefore, the matrix [0 1; 1 0] does not have an LU factorization.
2.4.93 a Write L = [L^(m), 0; L3, L4] and U = [U^(m), U2; 0, U4].
Then A = LU = [L^(m)U^(m), L^(m)U2; L3U^(m), L3U2 + L4U4], so that A^(m) = L^(m)U^(m), as claimed.

b By Exercise 2.4.66, the matrices L and U are both invertible. By Exercise 2.4.35, the diagonal entries of L and U are all nonzero. For any m, the matrices L^(m) and U^(m) are triangular, with nonzero diagonal entries, so that they are invertible. By Theorem 2.4.7, the matrix A^(m) = L^(m)U^(m) is invertible as well.

c Using the hint, we write A = [A^(n−1), ~v; ~w, k] = [L′, 0; ~x, t][U′, ~y; 0, s].
We are looking for a column vector ~y, a row vector ~x, and scalars t and s satisfying these equations. The following equations need to be satisfied: ~v = L′~y, ~w = ~xU′, and k = ~x~y + ts.
We find that ~y = (L′)⁻¹~v, ~x = ~w(U′)⁻¹, and ts = k − ~w(U′)⁻¹(L′)⁻¹~v.
We can choose, for example, s = 1 and t = k − ~w(U′)⁻¹(L′)⁻¹~v, proving that A does indeed have an LU factorization.
Alternatively, one can show that if all principal submatrices are invertible then no row swaps are required in the Gauss-Jordan algorithm. In this case, we can find an LU factorization as outlined in Exercise 2.4.90.
2.4.94 a If A = LU is an LU factorization, then the diagonal entries of L and U are nonzero (compare with Exercise 2.4.93). Let D1 and D2 be the diagonal matrices whose diagonal entries are the same as those of L and U, respectively.
Then A = (LD1⁻¹)(D1D2)(D2⁻¹U) is the desired factorization, with new L = LD1⁻¹, D = D1D2, and new U = D2⁻¹U (verify that LD1⁻¹ and D2⁻¹U are of the required form).

b If A = L1D1U1 = L2D2U2 and A is invertible, then L1, D1, U1, L2, D2, U2 are all invertible, so that we can multiply the above equation by D2⁻¹L2⁻¹ from the left and by U1⁻¹ from the right:
D2⁻¹L2⁻¹L1D1 = U2U1⁻¹.
Since products and inverses of upper triangular matrices are upper triangular (and likewise for lower triangular matrices), the matrix D2⁻¹L2⁻¹L1D1 = U2U1⁻¹ is both upper and lower triangular, that is, it is diagonal. Since the diagonal entries of U2 and U1 are all 1, so are the diagonal entries of U2U1⁻¹, that is, U2U1⁻¹ = I_n, and thus U2 = U1.
Now L1D1 = L2D2, so that L2⁻¹L1 = D2D1⁻¹ is diagonal. As above, we have in fact L2⁻¹L1 = I_n and therefore L2 = L1.
2.4.95 Suppose A11 is a p × p matrix and A22 is a q × q matrix. For B to be the inverse of A we must have AB = I_{p+q}. Let us partition B the same way as A:
B = [B11, B12; B21, B22], where B11 is p × p and B22 is q × q.
Then AB = [A11, 0; 0, A22][B11, B12; B21, B22] = [A11B11, A11B12; A22B21, A22B22] = [I_p, 0; 0, I_q] means that
A11B11 = I_p, A22B22 = I_q, A11B12 = 0, A22B21 = 0.
This implies that A11 and A22 are invertible, and B11 = A11⁻¹, B22 = A22⁻¹.
This in turn implies that B12 = 0 and B21 = 0.
We summarize: A is invertible if (and only if) both A11 and A22 are invertible; in this case
A⁻¹ = [A11⁻¹, 0; 0, A22⁻¹].
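The blockwise formula can be tested numerically on a small example; a sketch with a hypothetical 2 + 1 block-diagonal matrix (the numbers are not from the text):

```python
def matmul(X, Y):
    """Product of two square matrices of matching size."""
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Block-diagonal A with A11 = [[2, 1], [1, 1]] and A22 = [[4]].
A = [[2, 1, 0],
     [1, 1, 0],
     [0, 0, 4]]
A11_inv = [[1, -1], [-1, 2]]   # inverse of [[2, 1], [1, 1]] (det = 1)
A22_inv = [[0.25]]
# Assemble A^{-1} blockwise, as in the summary above.
A_inv = [[A11_inv[0][0], A11_inv[0][1], 0],
         [A11_inv[1][0], A11_inv[1][1], 0],
         [0, 0, A22_inv[0][0]]]
print(matmul(A, A_inv))  # the 3x3 identity
```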
2.4.96 This exercise is very similar to Example 7 in the text. We outline the solution:
[A11, 0; A21, A22][B11, B12; B21, B22] = [I_p, 0; 0, I_q] means that
A11B11 = I_p, A11B12 = 0, A21B11 + A22B21 = 0, A21B12 + A22B22 = I_q.
This implies that A11 is invertible, and B11 = A11⁻¹. Multiplying the second equation with A11⁻¹, we conclude that B12 = 0. Then the last equation simplifies to A22B22 = I_q, so that B22 = A22⁻¹.
Finally, B21 = −A22⁻¹A21B11 = −A22⁻¹A21A11⁻¹.
We summarize: A is invertible if (and only if) both A11 and A22 are invertible. In this case,
A⁻¹ = [A11⁻¹, 0; −A22⁻¹A21A11⁻¹, A22⁻¹].
2.4.97 Suppose A11 is a p × p matrix. Since A11 is invertible, rref(A) has the block form [I_p, ∗; 0, rref(A23)], so that
rank(A) = p + rank(A23) = rank(A11) + rank(A23).
2.4.98 Try to find a matrix B = [X, ~x; ~y, t] (where X is n × n, ~x is a column vector, and ~y is a row vector) such that
AB = [I_n, ~v; ~w, 1][X, ~x; ~y, t] = [X + ~v~y, ~x + t~v; ~wX + ~y, ~w~x + t] = [I_n, ~0; ~0, 1].
We want X + ~v~y = I_n, ~x + t~v = ~0, ~wX + ~y = ~0, and ~w~x + t = 1.
Substituting ~x = −t~v into the last equation we find −t~w~v + t = 1, or t(1 − ~w~v) = 1.
This equation can be solved only if ~w~v ≠ 1, in which case t = 1/(1 − ~w~v). Now substituting X = I_n − ~v~y into the third equation, we find ~w − ~w~v~y + ~y = ~0, or ~y = −(1/(1 − ~w~v))~w = −t~w.
We summarize: A is invertible if (and only if) ~w~v ≠ 1. In this case, A⁻¹ = [I_n + t~v~w, −t~v; −t~w, t], where t = 1/(1 − ~w~v).
The same result can be found (perhaps more easily) by working with rref[A | I_{n+1}], rather than partitioned matrices.
2.4.99 Multiplying both sides with A⁻¹ we find that A = I_n: The identity matrix is the only invertible matrix with this property.
2.4.100 Suppose the entries of A are all a, where a ≠ 0. Then the entries of A² are all na². The equation na² = a is satisfied if a = 1/n. Thus the solution is the n × n matrix A all of whose entries are 1/n.
2.4.101 The ijth entry of AB is Σ_{k=1}^{n} a_ik b_kj. Then
Σ_{k=1}^{n} a_ik b_kj ≤ Σ_{k=1}^{n} s b_kj (since a_ik ≤ s) = s(Σ_{k=1}^{n} b_kj) ≤ sr,
since Σ_{k=1}^{n} b_kj ≤ r, being the jth column sum of B.
2.4.102 a We proceed by induction on m. Since the column sums of A are ≤ r, the entries of A¹ = A are also ≤ r¹ = r, so that the claim holds for m = 1. Suppose the claim holds for some fixed m. Now write A^{m+1} = A^m A; since the entries of A^m are ≤ r^m and the column sums of A are ≤ r, we can conclude that the entries of A^{m+1} are ≤ r^m r = r^{m+1}, by Exercise 101.

b For a fixed i and j, let b_m be the ijth entry of A^m. In part (a) we have seen that 0 ≤ b_m ≤ r^m.
Note that lim_{m→∞} r^m = 0 (since r < 1), so that lim_{m→∞} b_m = 0 as well (this follows from what some calculus texts call the "squeeze theorem").

c For a fixed i and j, let c_m be the ijth entry of the matrix I_n + A + A² + · · · + A^m. By part (a),
c_m ≤ 1 + r + r² + · · · + r^m < 1/(1 − r).
Since the c_m form an increasing bounded sequence, lim_{m→∞} c_m exists (this is a fundamental fact of calculus).

d (I_n − A)(I_n + A + A² + · · · + A^m) = I_n + A + A² + · · · + A^m − A − A² − · · · − A^m − A^{m+1} = I_n − A^{m+1}.
Now let m go to infinity; use parts (b) and (c): (I_n − A)(I_n + A + A² + · · · + A^m + · · ·) = I_n, so that
(I_n − A)⁻¹ = I_n + A + A² + · · · + A^m + · · ·.
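Part (d) can be illustrated numerically with the technology matrix of Exercise 2.4.49, whose column sums all lie below 1. A sketch comparing a partial sum I + A + A² + · · · + A^m with the effect of I − A (the helper `matmul` is illustrative, not from the text):

```python
def matmul(X, Y):
    """Product of two 3x3 matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.293, 0.0, 0.0],
     [0.014, 0.207, 0.017],
     [0.044, 0.01, 0.216]]
I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

# Partial sum S_m = I + A + A^2 + ... + A^m
S = [row[:] for row in I]
power = [row[:] for row in I]
for _ in range(60):
    power = matmul(power, A)
    S = [[s + p for s, p in zip(rs, rp)] for rs, rp in zip(S, power)]

# (I - A) S_m = I - A^{m+1} should be close to I for large m.
I_minus_A = [[I[i][j] - A[i][j] for j in range(3)] for i in range(3)]
P = matmul(I_minus_A, S)
print([[round(x, 6) for x in row] for row in P])  # close to the identity
```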
2.4.103 a The components of the jth column of the technology matrix A give the demands industry J_j makes on the other industries, per unit output of J_j. The fact that the jth column sum is less than 1 means that industry J_j adds value to the products it produces.

b A productive economy can satisfy any consumer demand ~b, since the equation (I_n − A)~x = ~b can be solved for the output vector ~x: ~x = (I_n − A)⁻¹~b (compare with Exercise 2.4.49).

c The output ~x required to satisfy a consumer demand ~b is
~x = (I_n − A)⁻¹~b = (I_n + A + A² + · · · + A^m + · · ·)~b = ~b + A~b + A²~b + · · · + A^m~b + · · ·.
To interpret the terms in this series, keep in mind that whatever output ~v the industries produce generates an interindustry demand of A~v.
The industries first need to satisfy the consumer demand, ~b. Producing the output ~b will generate an interindustry demand, A~b. Producing A~b in turn generates an extra interindustry demand, A(A~b) = A²~b, and so forth.
For a simple example, see Exercise 2.4.50; also read the discussion of "chains of interindustry demands" in the footnote to Exercise 2.4.49.
2.4.104 a We write our three equations below:
I = (1/3)R + (1/3)G + (1/3)B
L = R − G
S = −(1/2)R − (1/2)G + B,
so that the matrix is P = [1/3 1/3 1/3; 1 −1 0; −1/2 −1/2 1].

b [R, G, B]^T is transformed into [R, G, 0]^T, with matrix A = [1 0 0; 0 1 0; 0 0 0].
c This matrix is PA = [1/3 1/3 0; 1 −1 0; −1/2 −1/2 0] (we apply first A, then P).
Figure 2.66: for Problem 2.4.104d.
d See Figure 2.66. A "diagram chase" shows that M = PAP⁻¹ = [2/3 0 −2/9; 0 1 0; −1 0 1/3].

2.4.105 a A⁻¹ = [0 0 1; 1 0 0; 0 1 0] and B⁻¹ = [0 1 0; 0 0 1; 1 0 0].
Matrix A⁻¹ transforms a wife's clan into her husband's clan, and B⁻¹ transforms a child's clan into the mother's clan.
b B² transforms a woman's clan into the clan of a child of her daughter.
c AB transforms a woman's clan into the clan of her daughter-in-law (her son's wife), while BA transforms a man's clan into the clan of his children. The two transformations are different. (See Figure 2.67.)
Figure 2.67: for Problem 2.4.105c.
d The matrices for the four given diagrams (in the same order) are BB⁻¹ = I_3,
BAB⁻¹ = [0 0 1; 1 0 0; 0 1 0], B(BA)⁻¹ = [0 1 0; 0 0 1; 1 0 0], BA(BA)⁻¹ = I_3.

e Yes; since BAB⁻¹ = A⁻¹ = [0 0 1; 1 0 0; 0 1 0], in the second case in part (d) the cousin belongs to Bueya's husband's clan.
2.4.106 a We need 8 multiplications: 2 to compute each of the four entries of the product.
b We need n multiplications to compute each of the mp entries of the product, mnp multiplications altogether.
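The count mnp can be confirmed by instrumenting a naive matrix multiply; a sketch (the helper `count_multiplications` is illustrative, not from the text):

```python
def count_multiplications(m, n, p):
    """Naively multiply an m x n matrix by an n x p matrix, counting scalar multiplications."""
    X = [[1] * n for _ in range(m)]
    Y = [[1] * p for _ in range(n)]
    Z = [[0] * p for _ in range(m)]
    count = 0
    for i in range(m):
        for j in range(p):
            for k in range(n):
                Z[i][j] += X[i][k] * Y[k][j]
                count += 1
    return count

print(count_multiplications(2, 2, 2))  # 8, as in part (a)
print(count_multiplications(3, 4, 5))  # 3*4*5 = 60
```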
2.4.107 g(f(x)) = x, for all x, so that g ∘ f is the identity, but f(g(x)) = x if x is even and f(g(x)) = x + 1 if x is odd.

2.4.108 a The formula
[y; n] = [1 − Rk, L + R − kLR; −k, 1 − kL][x; m]
is given, which implies that
y = (1 − Rk)x + (L + R − kLR)m.
In order for y to be independent of x it is required that 1 − Rk = 0, or k = 1/R = 40 (diopters).
1/k then equals R, which is the distance between the plane of the lens and the plane on which parallel incoming rays focus at a point; thus the term "focal length" for 1/k.

b Now we want y to be independent of the slope m (it must depend on x alone). In view of the formula above, this is the case if L + R − kLR = 0, or k = (L + R)/(LR) = 1/R + 1/L = 40 + 10/3 ≈ 43.3 (diopters).
c Here the transformation is
[y; n] = [1 0; −k2 1][1 D; 0 1][1 0; −k1 1][x; m] = [1 − k1D, D; k1k2D − k1 − k2, 1 − k2D][x; m].
We want the slope n of the outgoing rays to depend on the slope m of the incoming rays alone, and not on x; this forces k1k2D − k1 − k2 = 0, or D = (k1 + k2)/(k1k2) = 1/k1 + 1/k2, the sum of the focal lengths of the two lenses. See Figure 2.68.

Figure 2.68: for Problem 2.4.108c.
True or False
Ch 2.TF.1 T, by Theorem 2.2.4.
Ch 2.TF.2 T, by Theorem 2.4.6.
Ch 2.TF.3 T; The matrix is [1 −1; −1 1].
Ch 2.TF.4 F; The columns of a rotation matrix are unit vectors; see Theorem 2.2.3.
Ch 2.TF.5 T, by Theorem 2.4.3.
Ch 2.TF.6 T; Let A = B in Theorem 2.4.7.
Ch 2.TF.7 F, by Theorem 2.3.3.
Ch 2.TF.8 T, by Theorem 2.4.8.
Ch 2.TF.9 F; Matrix AB will be 3 × 5, by Definition 2.3.1b.
Ch 2.TF.10 F; Note that T(~0) ≠ ~0 here. A linear transformation transforms ~0 into ~0.
Ch 2.TF.11 T; The equation det(A) = k² − 6k + 10 = 0 has no real solution.

Ch 2.TF.12 T; The matrix fails to be invertible for k = 5 and k = −1, since the determinant det A = k² − 4k − 5 = (k − 5)(k + 1) is 0 for these values of k.

Ch 2.TF.13 F; Note that det(A) = (k − 2)² + 9 is always positive, so that A is invertible for all values of k.
Ch 2.TF.14 F; We can show by induction on m that the matrix A^m is of the form A^m = [1 ∗; 0 ∗] for all m, so that A^m fails to be positive. Indeed, A^{m+1} = A^m A = [1 ∗; 0 ∗][1 1/2; 0 1/2] = [1 ∗; 0 ∗].
Ch 2.TF.15 F; Consider A = I_2 (or any other invertible 2 × 2 matrix).
Ch 2.TF.16 T; Note that A = [1 2; 3 4]⁻¹ [1 1; 1 1] [5 6; 7 8]⁻¹ is the unique solution.
Ch 2.TF.17 F, by Theorem 2.4.9. Note that the determinant is 0.
Ch 2.TF.18 T, by Theorem 2.4.3.
Ch 2.TF.19 T; The shear matrix A = [1 1/2; 0 1] works.
Ch 2.TF.20 T; Simplify to see that T([x; y]) = [4y; −12x] = [0 4; −12 0][x; y].
Ch 2.TF.21 F; If matrix A has two identical rows, then so does AB, for any matrix B. Thus AB cannot be In , so
that A fails to be invertible.
Ch 2.TF.22 T, by Theorem 2.4.8. Note that A⁻¹ = A in this case.
Ch 2.TF.23 F; For any 2 × 2 matrix A, the two columns of A[1 1; 1 1] will be identical.

Ch 2.TF.24 T; One solution is A = [1 0; 1 0].
Ch 2.TF.25 F; A reflection matrix is of the form [a b; b −a], where a² + b² = 1. Here, a² + b² = 1 + 1 = 2.
Ch 2.TF.26 T Let B be the matrix whose columns are all ~xequ , the equilibrium vector of A.
Ch 2.TF.27 T; The product is det(A)I2 .
Ch 2.TF.28 T; Writing an upper triangular matrix A = [a b; 0 c] and solving the equation A² = [0 0; 0 0], we find that A = [0 b; 0 0], where b is any nonzero constant.

Ch 2.TF.29 T; Note that the matrix [0 −1; 1 0] represents a rotation through π/2. Thus n = 4 (or any multiple of 4) works.

Ch 2.TF.30 F; If a matrix A is invertible, then so is A⁻¹. But [1 1; 1 1] fails to be invertible.

Ch 2.TF.31 T; For example, A = (1/3)[1 1 1; 1 1 1; 1 1 1] and B = (1/2)[1 1; 1 1].

Ch 2.TF.32 F; Consider A = [1 0; 0 0] and B = [1 1; 0 0], with AB = [1 1; 0 0].

Ch 2.TF.33 F; Consider matrix [0 0 1; 0 1 0; 1 0 0], for example.
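Two of the claims above are easy to confirm numerically; the snippet below (an added check, with b = 5 chosen arbitrarily for Ch 2.TF.28) verifies that A = [0 b; 0 0] squares to the zero matrix and that the π/2-rotation matrix of Ch 2.TF.29 has fourth power I2.

```python
import numpy as np

N = np.array([[0, 5],
              [0, 0]])          # Ch 2.TF.28: A = [0 b; 0 0] with b = 5
R = np.array([[0, -1],
              [1,  0]])         # Ch 2.TF.29: rotation through pi/2

print((N @ N == 0).all())                                 # True: A^2 = 0
print((np.linalg.matrix_power(R, 4) == np.eye(2)).all())  # True: n = 4 works
```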
Ch 2.TF.34 T; Apply Theorem 2.4.8 to the equation (A²)⁻¹AA = In, with B = (A²)⁻¹A.
Ch 2.TF.35 F; Consider the matrix A that represents a rotation through the angle 2π/17.
Ch 2.TF.36 F; Consider the reflection matrix A = [1 0; 0 −1].
Ch 2.TF.37 T; We have (5A)⁻¹ = (1/5)A⁻¹.
Ch 2.TF.38 T; The equation A~ei = B~ei means that the ith columns of A and B are identical. This observation
applies to all the columns.
Ch 2.TF.39 T; Note that A²B = AAB = ABA = BAA = BA².
Ch 2.TF.40 T; Multiply both sides of the equation A² = A by A⁻¹.
Ch 2.TF.41 T; See Exercise 2.3.75.
Ch 2.TF.42 F; Consider A = [1 1/2; 0 1/2], with A⁻¹ = [1 −1; 0 2].
Ch 2.TF.43 F; Consider A = I2 and B = −I2.
Ch 2.TF.44 T; Since A~x is on the line onto which we project, the vector A~x remains unchanged when we project again: A(A~x) = A~x, or A²~x = A~x, for all ~x. Thus A² = A.
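The idempotence A² = A can be illustrated with a concrete projection matrix (added example; the unit vector u = (0.6, 0.8) is an arbitrary choice):

```python
import numpy as np

# Matrix of the orthogonal projection onto the line spanned by the unit vector u.
u = np.array([0.6, 0.8])
A = np.outer(u, u)

# Projecting twice equals projecting once.
print(np.allclose(A @ A, A))  # True
```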
Ch 2.TF.45 T; If you reflect twice in a row (about the same line), you will get the original vector back: A(A~x) = ~x, or A²~x = ~x = I2~x. Thus A² = I2 and A⁻¹ = A.
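Similarly, a concrete reflection matrix (added example, using the form [a b; b −a] with a = 0.6, b = 0.8, as in Ch 2.TF.25) confirms A² = I2 and A⁻¹ = A:

```python
import numpy as np

a, b = 0.6, 0.8                 # a^2 + b^2 = 1
A = np.array([[a, b],
              [b, -a]])         # reflection matrix

print(np.allclose(A @ A, np.eye(2)))      # True: A^2 = I2
print(np.allclose(np.linalg.inv(A), A))   # True: A is its own inverse
```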
Ch 2.TF.46 F; Let A = [1 1; 0 1], ~v = [1; 0], and ~w = [0; 1], for example.
Ch 2.TF.47 T; Let A = [1 0 0; 0 1 0] and B = [1 0; 0 1; 0 0], for example.
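This pair can be checked directly (added snippet): AB = I2 even though neither factor is square, while BA ≠ I3.

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0]])       # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [0, 0]])          # 3 x 2

print(A @ B)   # I2
print(B @ A)   # 3 x 3, but with a zero bottom row, so not I3
```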
Ch 2.TF.48 F; By Theorem 1.3.3, there is a nonzero vector ~x such that B~x = ~0, so that AB~x = ~0 as well. But I3~x = ~x ≠ ~0, so that AB ≠ I3.
Ch 2.TF.49 T; We can rewrite the given equation as A² + 3A = −4I3 and −(1/4)(A + 3I3)A = I3. By Theorem 2.4.8, the matrix A is invertible, with A⁻¹ = −(1/4)(A + 3I3).
Ch 2.TF.50 T; Note that (In + A)(In − A) = In − A² = In, so that (In + A)⁻¹ = In − A.
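For a concrete nilpotent A with A² = 0 (added check; the entry 7 is arbitrary), the product (In + A)(In − A) = In indeed identifies the inverse:

```python
import numpy as np

n = 3
A = np.zeros((n, n))
A[0, 1] = 7.0                   # strictly upper triangular, so A @ A = 0
I = np.eye(n)

print(np.allclose(A @ A, 0))                      # True
print(np.allclose(np.linalg.inv(I + A), I - A))   # True
```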
Ch 2.TF.51 F; A and C can be two matrices which fail to commute, and B could be In , which commutes with
anything.
Ch 2.TF.52 F; Consider T(~x) = 2~x, ~v = ~e1, and ~w = ~e2.
Ch 2.TF.53 F; Since there are only eight entries that are not 1, there will be at least two rows that contain only
ones. Having two identical rows, the matrix fails to be invertible.
Ch 2.TF.54 F; Let A = B = [0 1; 0 0], for example.
Ch 2.TF.55 F; We will show that S⁻¹[0 1; 0 0]S fails to be diagonal, for an arbitrary invertible matrix S = [a b; c d]. Now, S⁻¹[0 1; 0 0]S = (1/(ad − bc))[d −b; −c a][0 1; 0 0][a b; c d] = (1/(ad − bc))[cd, d²; −c², −cd]. Since c and d cannot both be zero (as S must be invertible), at least one of the off-diagonal entries (−c² and d²) is nonzero, proving the claim.
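A numerical spot check of this computation (added; S = [2 1; 3 4] is an arbitrary invertible choice with ad − bc = 5):

```python
import numpy as np

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
S = np.array([[2.0, 1.0],
              [3.0, 4.0]])      # a=2, b=1, c=3, d=4, det = 5

M = np.linalg.inv(S) @ N @ S
# By the formula (1/(ad-bc)) [cd, d^2; -c^2, -cd], M = (1/5) [12, 16; -9, -12].
print(np.round(M, 10))
```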
Ch 2.TF.56 T; Consider an ~x such that A²~x = ~b, and let ~x0 = A~x. Then A~x0 = A(A~x) = A²~x = ~b, as required.
Ch 2.TF.57 T; Let A = [a b; c d]. Now we want A⁻¹ = −A, or (1/(ad − bc))[d −b; −c a] = [−a −b; −c −d]. This holds if ad − bc = 1 and d = −a. These equations have many solutions: for example, a = d = 0, b = 1, c = −1. More generally, we can choose an arbitrary a and an arbitrary nonzero b. Then d = −a and c = −(1 + a²)/b.
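Both the particular solution and the general family can be verified numerically (added snippet; a = 2, b = 1 is an arbitrary pick for the general case):

```python
import numpy as np

# Particular solution: a = d = 0, b = 1, c = -1.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.allclose(np.linalg.inv(A), -A))  # True

# General family: a = 2, b = 1, so d = -2 and c = -(1 + 2**2)/1 = -5.
B = np.array([[2.0, 1.0],
              [-5.0, -2.0]])
print(np.allclose(np.linalg.inv(B), -B))  # True
```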
Ch 2.TF.58 F; Consider a 2 × 2 matrix A = [a b; c d]. We make an attempt to solve the equation A² = [a² + bc, ab + bd; ac + cd, cb + d²] = [a² + bc, b(a + d); c(a + d), d² + bc] = [1 0; 0 −1]. Now the equation b(a + d) = 0 implies that b = 0 or d = −a.
If b = 0, then the equation d² + bc = −1 cannot be solved.
If d = −a, then the two diagonal entries of A², a² + bc and d² + bc, will be equal, so that the equations a² + bc = 1 and d² + bc = −1 cannot be solved simultaneously.
In summary, the equation A² = [1 0; 0 −1] cannot be solved.
Ch 2.TF.59 T; Recall from Definition 2.2.1 that a projection matrix has the form [u1², u1u2; u1u2, u2²], where [u1; u2] is a unit vector. Thus, a² + b² + c² + d² = u1⁴ + (u1u2)² + (u1u2)² + u2⁴ = u1⁴ + 2(u1u2)² + u2⁴ = (u1² + u2²)² = 1² = 1.
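The sum-of-squares identity can be spot-checked with a concrete unit vector (added example, u = (0.6, 0.8)):

```python
import numpy as np

u1, u2 = 0.6, 0.8               # u1^2 + u2^2 = 1
A = np.array([[u1 * u1, u1 * u2],
              [u1 * u2, u2 * u2]])

# a^2 + b^2 + c^2 + d^2 = (u1^2 + u2^2)^2 = 1
print(np.sum(A ** 2))
```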
Ch 2.TF.60 T; We observe that the systems AB~x = ~0 and B~x = ~0 have the same solutions (multiply by A⁻¹ and A, respectively, to obtain one system from the other). Then, by True or False Exercise 45 in Chapter 1, rref(AB) = rref(B).
Ch 2.TF.61 T; For example, A = [0 1; 1 0], with A^m = [1 0; 0 1] for even m and A^m = [0 1; 1 0] for odd m.
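The alternating powers are easy to confirm (added check):

```python
import numpy as np

A = np.array([[0, 1],
              [1, 0]])

print(np.linalg.matrix_power(A, 6))   # I2 (even exponent)
print(np.linalg.matrix_power(A, 7))   # A itself (odd exponent)
```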
Ch 2.TF.62 T; We need to show that the system A~x = ~x, or (A − In)~x = ~0, has a nonzero solution ~x. This amounts to showing that rank(A − In) < n or, equivalently, that rref(A − In) has a row of zeros. By definition of a transition matrix, the sum of all the row vectors of A is [1 1 … 1], so that the sum of all the row vectors
of A − In is the zero row vector. If we add rows 1 through (n − 1) to the last row of A − In, we generate a row of zeros, as required.
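A concrete transition matrix illustrates the rank argument (added example; the matrix below has nonnegative entries with each column summing to 1, matching the row-sum identity used above):

```python
import numpy as np

A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.7, 0.3],
              [0.2, 0.1, 0.4]])
n = A.shape[0]

# The rows of A - I sum to the zero row vector, so rank(A - I) < n and
# A x = x has a nonzero solution x.
print(np.linalg.matrix_rank(A - np.eye(n)))  # less than n = 3
```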