
Section 5.2 Matrix representations of linear transformations

We have seen how the coordinate vector map can be used to translate a linear algebraic question posed about a finite-dimensional vector space \(V\) into a question about \(\R^n\text{,}\) where we have many computational algorithms at our disposal. We would like to extend this technique to linear transformations \(T\colon V\rightarrow W\text{,}\) where both \(V\) and \(W\) are finite-dimensional. The basic idea, to be fleshed out below, can be described as follows:
  1. Pick a basis \(B\) for \(V\text{,}\) and a basis \(B'\) for \(W\text{.}\)
  2. “Identify” \(V\) with \(\R^n\) and \(W\) with \(\R^m\) using the coordinate vector isomorphisms \([\hspace{5pt}]_B\) and \([\hspace{5pt}]_{B'}\text{,}\) respectively.
  3. “Model” the linear transformation \(T\colon V\rightarrow W\) with a certain linear transformation \(T_A\colon \R^n\rightarrow \R^m\text{.}\)
The matrix \(A\) defining \(T_A\) will be called the matrix representing \(T\) with respect to our choice of basis \(B\) for \(V\) and \(B'\) for \(W\).
In what sense does \(A\) “model” \(T\text{?}\) All the properties of \(T\) we are interested in (\(\NS T\text{,}\) \(\nullity T\text{,}\) \(\im T\text{,}\) \(\rank T\text{,}\) etc.) are perfectly mirrored by the matrix \(A\text{.}\) As a result, this technique allows us to answer questions about the original \(T\) essentially by applying a relevant matrix algorithm to \(A\text{.}\)

Subsection 5.2.1 Matrix representations of linear transformations

Given a linear transformation \(T\colon V\rightarrow W\) and a choice of ordered bases \(B\) and \(B'\) of \(V\) and \(W\text{,}\) respectively, we define the matrix \(A=[T]_B^{B'}\) representing \(T\) column by column, using a familiar-looking formula.

Definition 5.2.1. Matrix representations of linear transformations.

Let \(V\) and \(W\) be vector spaces with ordered bases \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) and \(B'=(\boldw_1, \boldw_2, \dots, \boldw_m)\text{,}\) respectively. Given a linear transformation \(T\colon V\rightarrow W\text{,}\) the matrix representing \(T\) with respect to \(B\) and \(B'\) is the \(m\times n\) matrix \([T]_B^{B'}\) whose \(j\)-th column is \([T(\boldv_j)]_{B'}\text{,}\) considered as a column vector: i.e.,
\begin{equation} [T]_B^{B'}=\begin{amatrix}[cccc]\vert \amp \vert \amp \amp \vert \\ \left[T(\boldv_1)\right]_{B'}\amp [T(\boldv_2)]_{B'}\amp \dots \amp [T(\boldv_n)]_{B'} \\ \vert \amp \vert \amp \amp \vert \end{amatrix}\text{.}\tag{5.2.1} \end{equation}
In the special case where \(W=V\) and we pick \(B'=B\) we write simply \([T]_B\text{.}\)

Example 5.2.2.

Consider the linear transformation \(T\colon P_{3}\rightarrow P_{2}\) defined as \(T(p(x))=p'(x)\text{.}\) Compute \(A=[T]_{B}^{B'}\text{,}\) where \(B\) and \(B'\) are the standard bases for \(P_3\) and \(P_2\text{,}\) respectively.
Solution.
We have \(B=(x^3, x^2, x, 1)\) and \(B'=(x^2, x, 1)\text{.}\) Denote by \(\boldc_j\) the \(j\)-th column of \(A\text{.}\) We use (5.2.1) to compute:
\begin{align*} \boldc_1\amp =[T(x^3)]_{B'}=[3x^2]_{B'}=\begin{bmatrix} 3\\ 0\\ 0 \end{bmatrix} \amp \boldc_2\amp =[T(x^2)]_{B'}=[2x]_{B'}=\begin{bmatrix} 0\\ 2\\ 0 \end{bmatrix}\\ \boldc_3\amp =[T(x)]_{B'}=[1]_{B'}=\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} \amp \boldc_4\amp =[T(1)]_{B'}=[0]_{B'}=\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}\text{.} \end{align*}
Thus \(A=\begin{bmatrix}3\amp 0\amp 0\amp 0\\ 0\amp 2\amp 0\amp 0\\ 0\amp 0\amp 1\amp 0 \end{bmatrix}\text{.}\)
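This computation is easy enough by hand, but it can also be checked with software. The sketch below (assuming NumPy is available) exploits the fact that the \(B\)-coordinate vector of a polynomial in \(P_3\) is just its coefficient vector, highest degree first, and that np.polyder differentiates such a coefficient vector:

```python
import numpy as np

# Standard-basis coordinates: a polynomial in P_3 is stored as its
# coefficient vector (highest degree first), i.e. its B-coordinate vector.
basis_B = [np.array([1, 0, 0, 0]),  # x^3
           np.array([0, 1, 0, 0]),  # x^2
           np.array([0, 0, 1, 0]),  # x
           np.array([0, 0, 0, 1])]  # 1

# Column j of [T]_B^{B'} is the B'-coordinate vector of T(v_j) = v_j'.
A = np.column_stack([np.polyder(v) for v in basis_B])
print(A)
# expected output:
# [[3 0 0 0]
#  [0 2 0 0]
#  [0 0 1 0]]
```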
The formula for \([T]_{B}^{B'}\) should remind you of the formula from Corollary 3.6.16 used for computing the standard matrix of a linear transformation \(T\colon \R^n\rightarrow \R^m\text{:}\) i.e., the matrix \(A\) such that \(T(\boldx)=A\boldx\) for all \(\boldx\in \R^n\text{.}\) Theorem 5.2.3 explicates this resemblance.

Theorem 5.2.3.

Let \(T\colon \R^n\rightarrow \R^m\) be a linear transformation, and let \(B\) and \(B'\) be the standard ordered bases of \(\R^n\) and \(\R^m\text{,}\) respectively. Then \([T]_B^{B'}\) is the standard matrix of \(T\text{:}\) i.e., \([T]_B^{B'}=A\text{,}\) where \(A\) is the matrix satisfying \(T(\boldx)=A\boldx\) for all \(\boldx\in \R^n\text{.}\)
Proof.
According to the recipe in Corollary 3.6.16 we have
\begin{equation*} A= \begin{bmatrix}\vert\amp \vert\amp \amp \vert \\ T(\bolde_1)\amp T(\bolde_2)\amp \cdots \amp T(\bolde_n)\\ \vert\amp \vert\amp \amp \vert \end{bmatrix}\text{.} \end{equation*}
To see why \(A=[T]_{B}^{B'}\text{,}\) observe that for all \(1\leq j\leq n\) the \(j\)-th column of \(A\) is \(T(\bolde_j)\text{,}\) while the \(j\)-th column of \([T]_B^{B'}\) is \([T(\bolde_j)]_{B'}\text{.}\) That these are equal follows from the fact that for all vectors \(\boldw\in \R^m\) we have \([\boldw]_{B'}=\boldw\text{:}\) that is, the coordinate vector of a vector \(\boldw\in \R^m\) with respect to the standard basis is just \(\boldw\) itself. (See Example 5.1.5.)

Remark 5.2.4.

Let \(T\colon \R^n\rightarrow \R^m\) be a linear transformation, and let \(A\) be its standard matrix: i.e., \(A\) is the \(m\times n\) matrix satisfying \(T(\boldx)=A\boldx\) for all \(\boldx\in \R^n\text{.}\) According to Theorem 5.2.3, the standard matrix \(A\) is just one way of representing \(T\text{:}\) namely, the representation with respect to the standard bases of \(\R^n\) and \(\R^m\text{.}\) This raises the question of whether a different choice of bases might give rise to a more convenient matrix representation of \(T\text{.}\) The answer is yes, as we will see over the course of this chapter.

Example 5.2.5.

Define \(T\colon \R^2\rightarrow \R^2\) as \(T(x,y)=(4x-3y,2x-y)\text{.}\)
  1. Compute \([T]_B\text{,}\) where \(B=(\bolde_1, \bolde_2)\) is the standard basis of \(\R^2\text{.}\)
  2. Compute \([T]_{B'}\text{,}\) where \(B'=((1,1), (1,-1))\text{.}\)
Solution.
  1. According to Theorem 5.2.3, since \(B\) is the standard basis, \([T]_B\) is the matrix \(A\) such that \(T=T_A\text{:}\)
    \begin{align*} [T]_B\amp=\begin{bmatrix} \vert \amp \vert \\ T(\bolde_1)\amp T(\bolde_2)\\ \vert \amp \vert \end{bmatrix} \\ \amp= \begin{amatrix}[rr] 4\amp -3\\ 2\amp -1 \end{amatrix} \text{.} \end{align*}
  2. We have
    \begin{align*} [T]_{B'}=[T]_{B'}^{B'} \amp = \begin{bmatrix} \vert\amp \vert\\ [T((1,1))]_{B'}\amp [T((1,-1))]_{B'}\\ \vert \amp \vert \end{bmatrix} \\ \amp = \begin{bmatrix} \vert\amp \vert\\ [(1,1)]_{B'}\amp [(7,3)]_{B'}\\ \vert \amp \vert \end{bmatrix}\\ \amp = \begin{amatrix}[rr] 1\amp 5\\ 0\amp 2 \end{amatrix}\text{,} \end{align*}
    where the last equality uses the fact that \([(1,1)]_{B'}=(1,0)\) and \([(7,3)]_{B'}=(5,2)\text{,}\) as you can verify yourself.
So we have \([T]_B=\begin{amatrix}[rr]4\amp -3\\ 2\amp -1 \end{amatrix}\) and \([T]_{B'}=\begin{amatrix}[rr]1\amp 5\\ 0\amp 2 \end{amatrix}\text{.}\) Moral: different choices of bases yield different matrix representations.
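As a sanity check on the second computation, the sketch below (assuming NumPy is available) builds \([T]_{B'}\) column by column directly from Definition 5.2.1: the \(B'\)-coordinate vector of \(T(\boldv_j)\) is the solution \(\boldc\) of \(P\boldc=T(\boldv_j)\text{,}\) where the columns of \(P\) are the vectors of \(B'\text{.}\)

```python
import numpy as np

def T(v):
    x, y = v
    return np.array([4*x - 3*y, 2*x - y])  # T(x, y) = (4x - 3y, 2x - y)

# Columns of P are the vectors of B' = ((1, 1), (1, -1)).
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# Column j of [T]_{B'} is [T(v_j)]_{B'}, i.e. the solution of P c = T(v_j).
A_prime = np.column_stack([np.linalg.solve(P, T(v)) for v in [(1, 1), (1, -1)]])
print(A_prime)  # expected: [[1. 5.]
                #            [0. 2.]]
```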

Subsection 5.2.2 Matrix representations as models

Before moving on to more examples, we describe in what precise sense the matrix \(A=[T]_B^{B'}\) models the original linear transformation \(T\colon V\rightarrow W\text{,}\) and how we can use \(A\) to answer questions about \(T\text{.}\) The next theorem is key to understanding this.

Theorem 5.2.6.

Let \(T\colon V\rightarrow W\) be a linear transformation, and let \(B\) and \(B'\) be ordered bases of \(V\) and \(W\text{,}\) respectively, where \(\dim V=n\) and \(\dim W=m\text{.}\)
  1. For all \(\boldv\in V\) we have
    \begin{equation} [T]_B^{B'}[\boldv]_B=[T(\boldv)]_{B'}\text{.}\tag{5.2.2} \end{equation}
  2. The matrix \([T]_B^{B'}\) is the unique \(m\times n\) matrix with this property: i.e., if \(A[\boldv]_B=[T(\boldv)]_{B'}\) for all \(\boldv\in V\text{,}\) then \(A=[T]_B^{B'}\text{.}\)
Proof.
Let \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\text{.}\)
  1. By definition we have
    \begin{equation*} [T]_B^{B'}=\begin{amatrix}[cccc]\vert \amp \vert \amp \amp \vert \\ \left[T(\boldv_1)\right]_{B'}\amp [T(\boldv_2)]_{B'}\amp \dots \amp [T(\boldv_n)]_{B'} \\ \vert \amp \vert \amp \amp \vert \end{amatrix}\text{.} \end{equation*}
    Given any \(\boldv\in V\text{,}\) we can write
    \begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\dots +c_n\boldv_n \end{equation*}
    for some \(c_i\in \R\text{.}\) Then
    \begin{align*} [T]_{B}^{B'}[\boldv]_B \amp= [T]_{B}^{B'} \begin{bmatrix} c_1\\ c_2\\ \vdots \\ c_n \end{bmatrix} \\ \amp=c_1[T(\boldv_1)]_{B'}+c_2[T(\boldv_2)]_{B'}+\cdots +c_n[T(\boldv_n)]_{B'} \amp (\text{column method})\\ \amp = [c_1T(\boldv_1)+c_2T(\boldv_2)+\cdots +c_nT(\boldv_n)]_{B'} \amp (\text{Theorem 5.1.12})\\ \amp=[T(c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n)]_{B'} \amp (T \text{ is linear})\\ \amp =[T(\boldv)]_{B'}\text{,} \end{align*}
    as desired.
  2. Assume \(A\) satisfies
    \begin{equation*} A[\boldv]_B=[T(\boldv)]_{B'} \end{equation*}
    for all \(\boldv\in V\text{.}\) Then in particular we have
    \begin{equation} A[\boldv_i]_B=[T(\boldv_i)]_{B'}\tag{5.2.3} \end{equation}
    for all \(1\leq i\leq n\text{.}\) Since \(\boldv_i\) is the \(i\)-th element of \(B\text{,}\) we have \([\boldv_i]_B=\bolde_i\text{,}\) the \(i\)-th standard basis element of \(\R^n\text{.}\) Using the column method (2.1.24), we see that
    \begin{equation*} A[\boldv_i]_B=A\bolde_i=\boldc_i, \end{equation*}
    where \(\boldc_i\) is the \(i\)-th column of \(A\text{.}\) Thus (5.2.3) implies that the \(i\)-th column of \(A\) is equal to \([T(\boldv_i)]_{B'}\text{,}\) the \(i\)-th column of \([T]_B^{B'}\text{,}\) for all \(1\leq i\leq n\text{.}\) Since \(A\) and \([T]_{B}^{B'}\) have identical columns, we conclude that \(A=[T]_{B}^{B'}\text{,}\) as desired.

Remark 5.2.7. Uniqueness of \([T]_B^{B'}\).

The uniqueness property described in (2) of Theorem 5.2.6 provides an alternative way of computing \([T]_{B}^{B'}\) that can be useful in certain situations: namely, simply provide an \(m\times n\) matrix \(A\) that satisfies the defining property
\begin{equation*} A[\boldv]_B=[T(\boldv)]_{B'} \end{equation*}
for all \(\boldv\in V\text{.}\) Since there is only one such matrix, we must have \(A=[T]_B^{B'}\) in this case.
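Here, for instance, is a quick numerical spot check (a sketch assuming NumPy is available) that the matrix \(A\) of Example 5.2.2 satisfies (5.2.2): multiplying the \(B\)-coordinate vector of a randomly chosen \(p(x)\in P_3\) by \(A\) returns the \(B'\)-coordinate vector of \(p'(x)\text{.}\)

```python
import numpy as np

A = np.array([[3, 0, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 1, 0]])  # [T]_B^{B'} from Example 5.2.2

rng = np.random.default_rng(seed=2024)
p = rng.integers(-9, 10, size=4)  # a random p(x) in P_3, stored as [p]_B

# Defining property (5.2.2): A [p]_B should equal [T(p)]_{B'} = [p']_{B'}.
assert np.array_equal(A @ p, np.polyder(p))
```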
Let \(T\colon V\rightarrow W\text{,}\) \(B\text{,}\) and \(B'\) be as in Theorem 5.2.6. The defining property (5.2.2) of \([T]_B^{B'}\) can be summarized by saying that the following diagram is commutative.
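In symbols, the diagram is the square
\begin{equation*} \begin{array}{ccc} V \amp \xrightarrow{\;T\;} \amp W \\ {[\hspace{5pt}]_{B}}\,\big\downarrow \amp \amp \big\downarrow\,{[\hspace{5pt}]_{B'}} \\ \R^n \amp \xrightarrow{\;[T]_B^{B'}\;} \amp \R^m \end{array}\text{.} \end{equation*}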
Figure 5.2.8. Commutative diagram for \([T]_B^{B'}\)
The diagram being commutative here means that starting with an element \(\boldv\in V\) in the top left of the diagram, whether we travel to the bottom right by first applying \(T\) and then \([\hspace{5pt}]_{B'}\) (“go right, then down”), or by first applying \([\hspace{5pt}]_B\) and then \([T]_B^{B'}\) (“go down, then right”), we get the same result! (The bottom map should technically be labeled \(T_A\text{,}\) where \(A=[T]_B^{B'}\text{,}\) but this would detract from the elegance of the diagram.)
Besides commutativity, the other important feature of Figure 5.2.8 is that the two vertical coordinate transformations identify the domain and codomain of \(T\) with the familiar spaces \(\R^n\) and \(\R^m\) in a one-to-one manner. (Using the language of Section 3.9, these maps are isomorphisms.) These properties together allow us to translate any linear algebraic question about \(T\) to an equivalent question about the matrix \(A\text{,}\) as the following theorem indicates.

Theorem 5.2.9.

Let \(T\colon V\rightarrow W\) be a linear transformation, let \(B\) and \(B'\) be ordered bases of \(V\) and \(W\text{,}\) respectively, and let \(A=[T]_B^{B'}\text{.}\)
  1. For all \(\boldv\in V\) we have \(\boldv\in \NS T\) if and only if \([\boldv]_B\in \NS A\text{.}\)
  2. For all \(\boldw\in W\) we have \(\boldw\in \im T\) if and only if \([\boldw]_{B'}\in \CS A\text{.}\)
Proof.
We have
\begin{align*} \boldv\in \NS T \amp\iff T(\boldv)=\boldzero \\ \amp\iff [T(\boldv)]_{B'}=[\boldzero]_{B'}=\boldzero \amp (\text{Theorem 5.1.12, (2)})\\ \amp \iff [T]_{B}^{B'}[\boldv]_B=\boldzero \amp (\text{(5.2.2)})\\ \amp\iff [\boldv]_B\in\NS A \amp (A=[T]_B^{B'})\text{.} \end{align*}
We have
\begin{align*} \boldw\in \im T \amp\iff \boldw=T(\boldv) \text{ for some } \boldv\in V \\ \amp\iff [\boldw]_{B'}=[T(\boldv)]_{B'} \text{ for some } \boldv \in V \amp (\text{Theorem 5.1.12, (2)}) \\ \amp\iff [\boldw]_{B'}=[T]_{B}^{B'}[\boldv]_B \text{ for some } \boldv\in V \amp (\text{(5.2.2)}) \\ \amp\iff [\boldw]_{B'}=A\boldx \text{ for some } \boldx\in \R^n \amp (A=[T]_B^{B'}\text{, Theorem 5.1.12, (3)})\\ \amp \iff [\boldw]_{B'}\in \CS A\text{.} \end{align*}
The last equivalence follows from the fact that
\begin{equation*} \CS A=\im T_A=\{\boldb\in \R^m\colon \boldb=A\boldx \text{ for some } \boldx\in \R^n\}\text{.} \end{equation*}

Example 5.2.10.

Consider again Example 5.2.2, where we modeled the linear transformation
\begin{align*} T\colon P_3 \amp\rightarrow P_2 \\ p(x)\amp\mapsto p'(x) \end{align*}
as
\begin{equation*} [T]_B^{B'}=\begin{bmatrix}3\amp 0\amp 0\amp 0\\ 0\amp 2\amp 0\amp 0\\ 0\amp 0\amp 1\amp 0 \end{bmatrix}\text{.} \end{equation*}
Here \(B=(x^3, x^2, x, 1)\) and \(B'=(x^2, x, 1)\) are the respective standard bases of \(P_3\) and \(P_2\text{.}\)
Let \(A=[T]_B^{B'}\text{.}\) By inspection, we see easily that
\begin{align*} \NS A \amp= \Span\{(0,0,0,1)\} \amp \CS A\amp =\Span \{(3,0,0), (0,2,0),(0,0,1)\}=\R^3 \text{.} \end{align*}
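These bases can also be found mechanically. The following sketch (assuming SymPy is available) computes them with the nullspace and columnspace methods:

```python
from sympy import Matrix

A = Matrix([[3, 0, 0, 0],
            [0, 2, 0, 0],
            [0, 0, 1, 0]])

print(A.nullspace())    # basis for NS A: [Matrix([[0], [0], [0], [1]])]
print(A.columnspace())  # basis for CS A: the three pivot columns, so CS A = R^3
```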
Using Theorem 5.2.9 we can lift these spanning sets back to \(P_3\) and \(P_2\) to conclude that
\begin{align*} \NS T \amp =\Span\{1\}\subseteq P_3 \amp \im T\amp =\Span\{3x^2, x, 1\}=P_2 \text{.} \end{align*}
What do these results tell us about the differential operation \(T(p)=p'\text{?}\)
From \(\NS T=\Span\{1\}=\{p(x)\in P_3\colon p(x)=c \text{ for some } c\in\R\}\text{,}\) we conclude that \(T(p)=p'=\boldzero\) if and only if \(p\) is a constant polynomial: i.e., the polynomials whose derivative is equal to the zero function are precisely the constant polynomials.
From \(\im T=P_2\text{,}\) we conclude that for all \(q(x)\in P_2\) there is a \(p(x)\in P_3\) such that \(p'(x)=q(x)\text{.}\) Using the language of calculus, this means that every polynomial of degree at most two has an antiderivative that itself is a polynomial of degree at most three.
Linear algebra here reveals some properties of the derivative, in the restricted context of polynomial functions, that calculus shows to be true more generally: namely, that a function \(f\in C^1(\R)\) satisfies \(f'=\boldzero\) if and only if \(f\) is a constant function, and that every continuous function \(g\in C(\R)\) has an antiderivative.


Subsection 5.2.3 Choice of basis

Given a linear transformation \(T\colon V\rightarrow W\text{,}\) different choices of bases for \(V\) and \(W\) give rise to different matrix representations of \(T\text{.}\) This observation immediately raises two questions: What is the precise relationship between two different matrix representations? And how should we choose our bases so that the resulting matrix representation is useful to us? We will take up these questions in earnest in the subsequent sections. For now we content ourselves with an illustrative, long-format example.

Example 5.2.12. Two representations of orthogonal projection.

Let \(W\subseteq \R^3\) be the plane passing through the origin with normal vector \(\boldn=(1,1,1)\) (with respect to the dot product): i.e.,
\begin{equation*} W=\{(x,y,z)\in \R^3\colon x+y+z=0\}\text{.} \end{equation*}
Consider the orthogonal projection transformation \(\operatorname{proj}_W\colon \R^3\rightarrow \R^3\text{,}\) which we denote by \(T\) in what follows. With respect to the standard basis \(B=(\bolde_1, \bolde_2, \bolde_3)\) we know from Theorem 5.2.3 that \([T]_B=[T]_B^B\) is just the matrix \(A\) such that \(T=T_A\text{.}\) Using the general formula for projection onto a plane derived in Example 4.3.17, we conclude:
\begin{equation*} [T]_B=A=\frac{1}{3}\begin{amatrix}[rrr] 2\amp -1\amp -1\\ -1\amp 2\amp -1\\ -1\amp -1\amp 2 \end{amatrix}\text{.} \end{equation*}
Now consider a nonstandard basis \(B'\) of \(\R^3\) constructed with an eye toward the geometry involved in the definition of this projection operator. Namely, we begin with an orthogonal basis of the plane \(W\) and extend it to an orthogonal basis of \(\R^3\text{.}\) The vectors \((1,-1,0)\) and \((1,1,-2)\) are orthogonal and lie in \(W\text{,}\) so they form an orthogonal basis of \(W\text{.}\) To complete the basis we simply add the normal vector \(\boldn\text{:}\)
\begin{equation*} B'=\left( \underset{\boldv_1}{(1,-1,0)}, \underset{\boldv_2}{(1,1,-2)}, \underset{\boldv_3}{(1,1,1)} \right)\text{.} \end{equation*}
Let \(A'=[T]_{B'}\text{,}\) and for all \(1\leq i\leq 3\) let \(\boldc_i\) be the \(i\)-th column of \(A'\text{.}\) Using (5.2.1) we compute
\begin{align*} \boldc_1 = [\proj{\boldv_1}{W}]_{B'}\amp =[\boldv_1]_{B'} \amp (\boldv_1\in W)\\ \amp =(1,0,0) \amp (\boldv_1=1\boldv_1+0\boldv_2+0\boldv_3)\\ \boldc_2 = [\proj{\boldv_2}{W}]_{B'}\amp =[\boldv_2]_{B'} \amp (\boldv_2\in W)\\ \amp =(0,1,0) \amp (\boldv_2=0\boldv_1+1\boldv_2+0\boldv_3)\\ \boldc_3 = [\proj{\boldv_3}{W}]_{B'}\amp =[\boldzero]_{B'} \amp (\boldv_3\in W^\perp)\\ \amp =(0,0,0) \amp (\boldzero=0\boldv_1+0\boldv_2+0\boldv_3)\text{.} \end{align*}
Thus
\begin{equation*} [T]_{B'}=A'=\begin{amatrix}[rrr]1\amp 0\amp 0\\ 0\amp 1\amp 0\\ 0\amp 0\amp 0 \end{amatrix}\text{.} \end{equation*}
Wow, \(A'\) is way simpler! How can the two very different matrices \(A\) and \(A'\) represent the same linear transformation? A useful way of thinking about this is to consider \(A\) and \(A'\) as two matrix formulas for \(\operatorname{proj}_W\) with respect to two different coordinate systems. This can be made precise by using the defining property (5.2.2) of \(A=[T]_B\) and \(A'=[T]_{B'}\text{.}\) For \(A=[T]_B\) we have
\begin{equation} A[\boldx]_B=[\proj{\boldx}{W}]_B \implies A\boldx=\proj{\boldx}{W}\text{,}\tag{5.2.4} \end{equation}
where we have used the fact that for the standard basis \(B\) we have \([\boldy]_B=\boldy\) for any \(\boldy\in \R^3\text{.}\) Thus we can compute \(\proj{\boldx}{W}\) directly with \(A\) as \(A\boldx\text{.}\) For \(A'=[T]_{B'}\text{,}\) on the other hand, property (5.2.2) reads as
\begin{equation} A'[\boldx]_{B'}=[\proj{\boldx}{W}]_{B'}\text{.}\tag{5.2.5} \end{equation}
Formula (5.2.5) indicates that we cannot use \(A'\) directly to compute \(\proj{\boldx}{W}\text{.}\) Rather, we must first compute the \(B'\)-coordinates of \(\boldx\) and then multiply by \(A'\text{,}\) at which point the \(B'\)-coordinates of \(\proj{\boldx}{W}\) are returned. In other words, \(A'\) describes the operation of \(\operatorname{proj}_W\) with respect to \(B'\)-coordinates of vectors in \(\R^3\text{.}\) As such, \(A\) may be more useful to us in terms of computing \(\operatorname{proj}_W\) directly. However, \(A'\) has the advantage of giving us a clear conceptual picture of projection. For example, we see directly that \(A'\) has nullity 1 and rank 2, and thus the same is true of \(\operatorname{proj}_W\text{.}\) Furthermore, understanding that the columns of \(A'\) describe how \(\operatorname{proj}_W\) acts on the basis \(B'\text{,}\) we see easily that \(\operatorname{proj}_W\) acts as the identity on the 2-dimensional space spanned by the first two basis elements, and sends the subspace spanned by the third basis element to \(\boldzero\text{.}\) In other words, \(A'\) neatly encodes the conceptual picture of \(\operatorname{proj}_W\) as an operator that fixes the plane \(W=\Span\{(1,-1,0),(1,1,-2)\}\) and sends everything in \(W^\perp=\Span\{(1,1,1)\}\) to \(\boldzero\text{.}\)
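The relationship between the two representations in this example can also be checked numerically. Column \(j\) of \([T]_{B'}\) is \([T(\boldv_j)]_{B'}=P^{-1}A\boldv_j\text{,}\) where the columns of \(P\) are the vectors of \(B'\text{,}\) so \(A'=P^{-1}AP\text{.}\) The sketch below (assuming NumPy is available; the formula \(A=I-\boldn\boldn^T/(\boldn\cdot\boldn)\) is one way to produce the standard matrix of \(\operatorname{proj}_W\)) confirms that conjugating \(A\) by \(P\) yields \(A'\text{.}\)

```python
import numpy as np

n = np.array([1.0, 1.0, 1.0])
A = np.eye(3) - np.outer(n, n) / n.dot(n)  # standard matrix of proj_W

# Columns of P are the vectors of B' = ((1,-1,0), (1,1,-2), (1,1,1)).
P = np.array([[1.0, 1.0, 1.0],
              [-1.0, 1.0, 1.0],
              [0.0, -2.0, 1.0]])

# A' = P^{-1} A P: column j is P^{-1} A v_j = [proj_W(v_j)]_{B'}.
A_prime = np.linalg.solve(P, A @ P)
print(np.round(A_prime, 12))  # expected: diag(1, 1, 0)
```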

Exercises 5.2.4 Exercises

Exercise Group.

Compute \([T]_B^{B'}\) for each provided \(T\colon V\rightarrow W\) and choice of bases \(B\) and \(B'\) of \(V\) and \(W\text{.}\) You may assume that the given \(T\) is a linear transformation.
1.
\(T\colon P_3\rightarrow P_4\text{,}\) \(T(p(x))=p'(x)-x\, p(x)\text{;}\) \(B=(x^3, x^2, x, 1)\text{;}\) \(B'=(x^4, x^3, x^2, x,1)\)
2.
\(T\colon P_2\rightarrow M_{22}\text{,}\) \(T(p(x))=\begin{bmatrix} p(-1)\amp p(0)\\ p(1)\amp p(2) \end{bmatrix}\text{;}\) \(B=(x^2,x,1)\text{;}\) \(B'=(E_{11}, E_{12}, E_{21}, E_{22})\)
3.
\(T\colon \R^2\rightarrow \R^2\text{,}\) \(T=T_A\) where \(A=\begin{bmatrix} 1\amp 2\\ 2\amp 4 \end{bmatrix}\text{;}\) \(B=B'=\left((1,2),(2,4)\right)\)

4.

Suppose \(T\colon P_3\rightarrow P_2\) is a linear transformation with matrix representation
\begin{equation*} [T]_B^{B'}=\begin{bmatrix} 2\amp 0\amp 0\amp -1\\ 0\amp 1\amp -1\amp 0\\ 1\amp 1\amp 1\amp 1 \end{bmatrix}\text{,} \end{equation*}
where \(B=(x^3, x^2, x, 1)\) and \(B'=(x^2, x, 1)\text{.}\) Use the defining property (5.2.2) of \([T]_B^{B'}\) to determine the formula for \(T(ax^3+bx^2+cx+d)\) for an arbitrary polynomial \(ax^3+bx^2+cx+d\in P_3\text{.}\)

5.

Suppose \(T\colon \R^2\rightarrow \R^2\) is a linear transformation with matrix representation
\begin{equation*} [T]_{B}=\begin{bmatrix} 1\amp 1\\ 1\amp 2 \end{bmatrix}\text{,} \end{equation*}
where \(B=\left( (1,2), (-1,1)\right)\text{.}\)
  1. Use the defining property (5.2.2) of \([T]_B\) to compute \(T((1,0))\) and \(T((0,1))\text{.}\) (You will first need to compute \([(1,0)]_B\) and \([(0,1)]_B\text{.}\) )
  2. Use (a) and the fact that \(T\) is linear to give a general formula for \(T(x,y)\) in terms of \(x\) and \(y\text{.}\)

6.

The function \(S\colon M_{22}\rightarrow M_{22}\) defined as \(S(A)=A^T-A\) is a linear transformation.
  1. Compute \(C=[S]_B\text{,}\) where \(B\) is the standard basis of \(M_{22}\text{.}\)
  2. Use Theorem 5.2.9 to lift bases of \(\NS C\) and \(\CS C\) back to bases for \(\NS S\) and \(\im S\text{.}\)
  3. Identify \(\NS S\) and \(\im S\) as familiar subspaces of matrices.

7.

The function \(T\colon P_2\rightarrow P_3\) defined by \(T(p(x))=xp(x-3)\) is a linear transformation.
  1. Compute \(A=[T]_B^{B'}\text{,}\) where \(B\) and \(B'\) are the standard bases of \(P_2\) and \(P_3\text{.}\)
  2. Use Theorem 5.2.9 to lift bases of \(\NS A\) and \(\CS A\) back to bases for \(\NS T\) and \(\im T\text{.}\)

8.

Let \(T\colon \R^2\rightarrow \R^3\) be the linear transformation defined as \(T(x,y)=(x-y, x+2y, y)\text{,}\) and let \(B=\left((1,0), (0,1)\right)\text{,}\) \(B'=\left((1,1),(1,-1)\right)\text{,}\) \(B''=\left((1,0,0),(0,1,0),(0,0,1)\right)\text{.}\)
  1. Compute \([T]_B^{B''}\text{.}\)
  2. Compute \([T]_{B'}^{B''}\text{.}\)

9.

Let \(T\colon P_2\rightarrow M_{22}\) be the linear transformation defined as
\begin{equation*} T(p(x))=\begin{bmatrix} p(0)\amp p(1)\\ p(-1)\amp p(0) \end{bmatrix}\text{,} \end{equation*}
and let \(B=(1,x,x^2)\text{,}\) \(B'=(1, 1+x, 1+x^2)\text{,}\) \(B''=(E_{11}, E_{12}, E_{21}, E_{22})\) (the standard basis of \(M_{22}\)).
  1. Compute \([T]_B^{B''}\text{.}\)
  2. Compute \([T]_{B'}^{B''}\text{.}\)

10.

Let \(T_1\colon V\rightarrow W\) and \(T_2\colon W\rightarrow U\) be linear transformations, and suppose \(B, B', B''\) are ordered bases for \(V\text{,}\) \(W\text{,}\) and \(U\text{,}\) respectively. Prove:
\begin{equation*} [T_2\circ T_1]_B^{B''}=[T_2]_{B'}^{B''}[T_1]_B^{B'}\text{.} \end{equation*}
Hint.
Let \(A_1=[T_1]_B^{B'}\) and \(A_2=[T_2]_{B'}^{B''}\text{.}\) Show that the matrix \(A_2A_1\) satisfies the defining property of \([T_2\circ T_1]_{B}^{B''}\text{:}\) i.e.,
\begin{equation*} A_2A_1[\boldv]_B=[T_2\circ T_1(\boldv)]_{B''} \end{equation*}
for all \(\boldv\in V\text{.}\)