
Section 3.4 Null space and image

In this section we introduce two subspaces that are associated naturally to a linear transformation \(T\colon V\rightarrow W\text{:}\) the null space and image.

Subsection 3.4.1 Null space and image of a linear transformation

Definition 3.4.1. Null space and image.

Let \(T\colon V\rightarrow W\) be a linear transformation.
  1. Null space.
    The null space of \(T\text{,}\) denoted \(\NS T\text{,}\) is defined as
    \begin{equation*} \NS T=\{\boldv\in V\colon T(\boldv)=\boldzero_W\}\text{.} \end{equation*}
  2. Image.
    The image (or range) of \(T\text{,}\) denoted \(\im T\text{,}\) is defined as
    \begin{equation*} \im T=\{\boldw\in W\colon \boldw=T(\boldv) \text{ for some } \boldv\in V \}\text{.} \end{equation*}

Remark 3.4.2.

A few remarks:
  1. Let \(T\colon V\rightarrow W\text{.}\) It is useful to keep in mind where \(\NS T\) and \(\im T\) “live” in this picture: we have \(\NS T\subseteq V\) and \(\im T\subseteq W\text{.}\) In other words, the null space is a subset of the domain, and the image is a subset of the codomain.
  2. Note that the image \(\im T\) of a linear transformation is just its image when considered simply as a function of sets. (See Definition 0.2.6.)
  3. The notion of a null space is analogous to the set of zeros (or roots) of a real-valued function \(f\colon X\rightarrow \R\text{,}\)
    \begin{equation*} \{x\in X\colon f(x)=0\}\text{,} \end{equation*}
    and “the zeros of \(T\)” is a useful English shorthand for \(\NS T\text{.}\) However, there is an important difference between the null space of a linear transformation and the zeros of an arbitrary real-valued function: the null space of a linear transformation comes with the added structure of a vector space (Theorem 3.4.8), whereas the zeros of an arbitrary function in general do not.
    The same observation can be made about the image of a linear transformation (Theorem 3.4.8), in comparison to the image of an arbitrary function.
Figure 3.4.3. Null space and image. (a) Null space lives in the domain; image lives in the codomain. (b) The entire null space gets mapped to \(\boldzero_W\text{.}\) (c) The entire domain is mapped to \(\im T\text{.}\)

Example 3.4.4. Matrix transformation.

Let
\begin{equation*} A=\begin{amatrix}[rrrr] 1\amp 2\amp 3\amp 4\\ 2\amp 4\amp 6\amp 8 \end{amatrix}\text{,} \end{equation*}
and let \(T_A\colon \R^4\rightarrow \R^2\) be its associated matrix transformation. Give parametric descriptions of \(\NS T_A\) and \(\im T_A\text{.}\)
Solution.
By definition
\begin{align*} \NS T_A \amp=\{\boldx=(x_1,x_2,x_3,x_4)\colon A\boldx=\boldzero\} \text{.} \end{align*}
Thus we must solve the matrix equation \(A\boldx=\boldzero\text{.}\) The corresponding augmented matrix row reduces to
\begin{equation*} \begin{amatrix}[rrrr|r] \boxed{1}\amp 2\amp 3\amp 4\amp 0\\ 0\amp 0\amp 0\amp 0\amp 0 \end{amatrix}\text{.} \end{equation*}
Following Procedure 1.3.5 we conclude that
\begin{equation*} \NS T_A=\{(-2r-3s-4t,r,s,t)\colon r,s,t\in \R\}\text{.} \end{equation*}
Next, \(\im T_A\) is the set of \(\boldy=(a,b)\) for which there is an \(\boldx\in \R^4\) satisfying \(A\boldx=\boldy\text{.}\) Thus we are asking which choices of \(\boldy=(a,b)\) make the linear system
\begin{equation*} \begin{linsys}{4} x_1\amp +\amp 2x_2\amp +\amp 3x_3\amp+\amp 4x_4\amp=\amp a\\ 2x_1\amp +\amp 4x_2\amp +\amp 6x_3\amp+\amp 8x_4\amp=\amp b \end{linsys} \end{equation*}
consistent. Again, Gaussian elimination gives us our answer. The corresponding augmented matrix row reduces to
\begin{equation*} \begin{amatrix}[rrrr|r] \boxed{1}\amp 2\amp 3\amp 4\amp a\\ 0\amp 0\amp 0\amp 0 \amp b-a \end{amatrix}\text{,} \end{equation*}
and we conclude from Procedure 1.3.5 that the system is consistent if and only if \(a-b=0\text{,}\) or \(a=b\text{.}\) Thus
\begin{equation*} \im T_A=\{(a,b)\in \R^2\colon a=b\}=\{(t,t)\colon t\in \R\}\text{.} \end{equation*}
This first example illustrates that in the special case of a matrix transformation \(T_A\colon \R^n\rightarrow \R^m\text{,}\) where \(A\) is an \(m\times n\) matrix, we have
\begin{equation*} \NS T_A=\{\boldx\in \R^n\colon T_A(\boldx)=\boldzero\}=\{\boldx\in \R^n\colon A\boldx=\boldzero\}\text{.} \end{equation*}
In other words, the null space of a matrix transformation \(T_A\) is just the set of solutions to the matrix equation \(A\boldx=\boldzero\text{.}\) The situation arises frequently enough that it deserves its own notation.

Definition 3.4.5. Null space of a matrix.

Let \(A\) be an \(m\times n\) matrix. The null space of \(A\text{,}\) denoted \(\NS A\text{,}\) is defined as
\begin{equation*} \NS A=\{\boldx\in \R^n\colon A\boldx=\boldzero\}\text{.} \end{equation*}
Equivalently, \(\NS A=\NS T_A\text{.}\)
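In Sage, \(\NS A\) can be computed directly with the method right_kernel (used again later in this section). Here is a minimal sketch for the matrix of Example 3.4.4; we specify rational coefficients (QQ) to sidestep the integer-ring subtlety discussed at the end of this section.

    A = Matrix(QQ, [[1, 2, 3, 4],
                    [2, 4, 6, 8]])
    A.right_kernel()   # NS A: a 3-dimensional subspace of QQ^4, agreeing with
                       # the parametric description found above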

Example 3.4.6.

Define \(S\colon M_{nn}\rightarrow M_{nn}\) as \(S(A)=A^T-A\text{.}\)
  1. Prove that \(S\) is linear.
  2. Identify \(\NS S\) as a familiar family of matrices.
  3. Identify \(\im S\) as a familiar family of matrices.
Solution.
  1. Linearity is an easy consequence of transpose properties. For any \(A_1, A_2\in M_{nn}\) and \(c_1,c_2\in \R\text{,}\) we have
    \begin{align*} S(c_1A_1+c_2A_2) \amp= (c_1A_1+c_2A_2)^T-(c_1A_1+c_2A_2) \\ \amp = c_1A_1^T+c_2A_2^T-c_1A_1-c_2A_2\amp (\knowl{./knowl/eg_transform_transpose.html}{\text{3.2.20}}) \\ \amp =c_1(A_1^T-A_1)+c_2(A_2^T-A_2)\\ \amp =c_1S(A_1)+c_2S(A_2)\text{.} \end{align*}
  2. We have
\begin{align*} \NS S \amp= \{A\in M_{nn}\colon S(A)=\boldzero\} \\ \amp=\{A\in M_{nn}\colon A^T-A=\boldzero\} \\ \amp=\{A\in M_{nn}\colon A^T=A\} \text{.} \end{align*}
    Thus \(\NS S\) is the subspace of symmetric \(n\times n\) matrices!
  3. Let \(W=\{B\in M_{nn}\colon B^T=-B\}\text{,}\) the subspace of skew-symmetric \(n\times n\) matrices. We claim \(\im S=W\text{.}\) As this is a set equality, we prove it by showing the two set inclusions \(\im S\subseteq W\) and \(W\subseteq \im S\text{.}\) (See Basic set properties.)
    The inclusion \(\im S\subseteq W\) is the easier of the two. If \(B\in \im S\text{,}\) then \(B=S(A)=A^T-A\) for some \(A\in M_{nn}\text{.}\) Using various properties of transposition, we have
    \begin{equation*} B^T=(A^T-A)^T=(A^T)^T-A^T=-(A^T-A)=-B\text{,} \end{equation*}
    showing that \(B\) is skew-symmetric, and thus \(B\in W\text{,}\) as desired.
    The inclusion \(W\subseteq \im S\) is trickier: we must show that if \(B\) is skew-symmetric, then there is an \(A\) such that \(B=S(A)=A^T-A\text{.}\) Assume we have a \(B\) with \(B^T=-B\text{.}\) Letting \(A=-\frac{1}{2}B\) we have
    \begin{equation*} A^T-A=(-\frac{1}{2}B)^T+\frac{1}{2}B=\frac{1}{2}(-B^T+B)=\frac{1}{2}(B+B)=B\text{.} \end{equation*}
    Thus we have found a matrix \(A\) satisfying \(S(A)=B\text{.}\) It follows that \(B\in\im S\text{,}\) as desired.
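    Both inclusions can be sanity-checked numerically in Sage; the choice of a random \(3\times 3\) rational test matrix below is ours, not part of the example.

    A = random_matrix(QQ, 3, 3)
    B = A.transpose() - A              # B = S(A), an element of im S
    print(B.transpose() == -B)         # True: B is skew-symmetric (im S inside W)
    A2 = -B/2                          # the preimage constructed in the proof
    print(A2.transpose() - A2 == B)    # True: S(A2) = B (W inside im S)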

Example 3.4.7. The derivative (calculus refresher).

Let \(T\colon C^1(\R)\rightarrow C(\R)\) be the differential operator \(T(f)=f'\text{.}\) (See 3.3.23.) Recall that for function spaces the zero vector \(\boldzero\) is by definition the zero function on \(\R\text{.}\)
The null space of \(T\) is the set of all differentiable functions whose derivative is the zero function:
\begin{equation*} \NS T=\{f\in C^1(\R)\colon f'(x)=0 \text{ for all } x\in \R\}\text{.} \end{equation*}
From calculus we know that this is precisely the set of all constant functions. Thus
\begin{equation*} \NS T=\{f\in C^1(\R)\colon f \text{ a constant function}\}\text{.} \end{equation*}
The image of \(T\) is defined as
\begin{align*} \im T\amp =\{g\in C(\R)\colon g=T(f) \text{ for some } f\in C^1(\R)\}\\ \amp= \{g\in C(\R)\colon g=f' \text{ for some } f\in C^1(\R)\}\text{.} \end{align*}
In other words, \(\im T\) is the set of continuous functions that are the derivative of some other function: i.e., the set of continuous functions that have an antiderivative. The fundamental theorem of calculus implies that in fact every continuous function \(g\) has an antiderivative! Indeed, we may take \(f(x)=\int_0^xg(t)\, dt\text{.}\) We conclude that \(\im T=C(\R)\text{.}\)
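The fundamental-theorem argument can be illustrated symbolically in Sage; the sample function \(g\) below is our own choice:

    t, x = var('t x')
    g(x) = exp(-x^2)                 # a sample continuous function
    f(x) = integrate(g(t), t, 0, x)  # f(x) = integral of g from 0 to x
    bool(diff(f(x), x) == g(x))      # True: f' = g, so g lies in im T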
You may have noticed that in the examples above the null space and image of the given linear transformation turned out to be subspaces. This is no accident!

Theorem 3.4.8. Null space and image are subspaces.

Let \(T\colon V\rightarrow W\) be a linear transformation. Then \(\NS T\) is a subspace of \(V\) and \(\im T\) is a subspace of \(W\text{.}\)
Proof.
Null space of \(T\).
We use the two-step technique to prove \(\NS T\) is a subspace.
  1. Since \(T(\boldzero_V)=\boldzero_W\) (Theorem 2.2.11), we see that \(\boldzero_V\in \NS T\text{.}\)
  2. Suppose \(\boldv_1, \boldv_2\in \NS T\text{.}\) Given any \(c,d\in \R\text{,}\) we have
    \begin{align*} T(c\boldv_1+d\boldv_2) \amp=cT(\boldv_1)+dT(\boldv_2) \amp (T \text{ is linear, } \knowl{./knowl/th_trans_props.html}{\text{Theorem 2.2.11}})\\ \amp=c\boldzero_W+d\boldzero_W \amp (\boldv_1, \boldv_2\in \NS T) \\ \amp = \boldzero_W\text{.} \end{align*}
    This shows that \(c\boldv_1+d\boldv_2\in \NS T\text{,}\) completing our proof.
Image of \(T\).
The proof proceeds in a similar manner.
  1. Since \(T(\boldzero_V)=\boldzero_W\) (Theorem 2.2.11), we see that \(\boldzero_W\) is “hit” by \(T\text{,}\) and hence is a member of \(\im T\text{.}\)
  2. Suppose \(\boldw_1, \boldw_2\in \im T\text{,}\) so that \(\boldw_1=T(\boldv_1)\) and \(\boldw_2=T(\boldv_2)\) for some \(\boldv_1, \boldv_2\in V\text{.}\) Given any \(c,d\in \R\text{,}\) linearity of \(T\) implies
    \begin{equation*} c\boldw_1+d\boldw_2=cT(\boldv_1)+dT(\boldv_2)=T(c\boldv_1+d\boldv_2)\text{,} \end{equation*}
    showing that \(c\boldw_1+d\boldw_2\in \im T\text{,}\) completing our proof.

Remark 3.4.9.

In the special case where \(T=T_A\) is the matrix transformation of an \(m\times n\) matrix \(A\text{,}\) Theorem 3.4.8 tells us that
\begin{equation*} \NS A=\{\boldx\in \R^n\colon A\boldx=\boldzero\} \end{equation*}
is a subspace. We knew this already from Theorem 3.3.10, which we now understand as a special instance of Theorem 3.4.8.
Theorem 3.4.8 gives rise to the following indirect method of proving that a subset \(W\subseteq V\) is a subspace: exhibit a linear transformation \(T\) satisfying \(W=\NS T\) (or \(W=\im T\)); since null spaces and images are subspaces, it follows that \(W\) is a subspace.

Example 3.4.11.

Define the subset \(W\) of \(P_2\) as
\begin{equation*} W=\{p\in P_2\colon p(-1)=p(2)=p(3)=0\}\text{.} \end{equation*}
Prove that \(W\) is a subspace by identifying it as the null space of a linear transformation.
Solution.
Define \(T\colon P_2\rightarrow \R^3\) to be the evaluation transformation defined as \(T(p)=(p(-1), p(2), p(3))\text{.}\) It is a straightforward exercise to show \(T\) is a linear transformation. Furthermore, it is clear that \(W=\NS T\text{.}\) We conclude that \(W\) is a subspace.
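In coordinates this can be made concrete. Relative to the basis \(\{1,x,x^2\}\) of \(P_2\) (our choice), \(T\) is represented by a Vandermonde-style matrix, and a Sage sketch confirms that here the null space is trivial:

    M = Matrix(QQ, [[1, -1, 1],
                    [1, 2, 4],
                    [1, 3, 9]])   # rows evaluate a + bx + cx^2 at x = -1, 2, 3
    M.right_kernel()              # trivial: W = NS T = {0} in this example

This reflects the fact that a nonzero polynomial of degree at most 2 cannot have three distinct roots.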
It is somewhat tricky to give a simple, concise description of the image of a linear transformation. As the next example illustrates, however, we can often come up with a parametric description by relating the problem to systems of linear equations.

Example 3.4.12. Image computation.

Consider the matrix transformation \(T_A\colon \R^2\rightarrow\R^3\text{,}\) where
\begin{equation*} A=\begin{bmatrix}1\amp 1\\ 2\amp 1\\ 3\amp 5 \end{bmatrix}\text{.} \end{equation*}
Give a parametric description of \(\im T_A\) and identify it as a familiar geometric object.
Solution.
By definition \(\im T_A\) is the set
\begin{equation*} \{\boldy\in\R^3\colon \boldy=T_A(\boldx) \text{ for some } \boldx\in \R^2 \}=\left\{\boldy\colon \boldy=A\boldx \text{ for some } \boldx\in\R^2 \right\}\text{.} \end{equation*}
Thus to compute \(\im T_A\) we must determine which choice of \(\boldy=(a,b,c)\) makes the system \(A\boldx=\boldy\) consistent. We answer this using our old friend Procedure 1.3.5:
\begin{align*} \begin{amatrix}[rr|r] 1\amp 1\amp a\\ 2\amp 1\amp b\\ 3\amp 5\amp c \end{amatrix} \amp \xrightarrow[r_3-3r_1]{r_2-2r_1} \begin{amatrix}[rr|r] 1\amp 1\amp a\\ 0\amp 1\amp 2a-b\\ 0\amp 0\amp -7a+2b+c \end{amatrix}\text{.} \end{align*}
Thus for the system to be consistent we need \(-7a+2b+c=0\text{,}\) and we conclude
\begin{equation*} \im T_A=\{(a,b,c)\colon -7a+2b+c=0\}\text{.} \end{equation*}
Geometrically we recognize this as the plane passing through \((0,0,0)\) with normal vector \(\boldn=(-7,2,1)\text{.}\) To describe it parametrically we can use Procedure 1.3.5 again on the equation \(-7a+2b+c=0\text{.}\) The unknowns \(b\) and \(c\) are free here, and we see that
\begin{equation*} \im T_A=\{(a,b,c)\colon -7a+2b+c=0\}=\left\{\left(\frac{1}{7}(2r+s), r,s \right)\colon r,s\in\R\right\}\text{.} \end{equation*}
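We can corroborate this computation in Sage, where column_space returns the image of a matrix transformation as the span of the columns. A sketch:

    A = Matrix(QQ, [[1, 1],
                    [2, 1],
                    [3, 5]])
    V = A.column_space()    # im T_A
    n = vector([-7, 2, 1])
    all(n.dot_product(u) == 0 for u in V.basis())   # True: im T_A lies in the plane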
Our last example in this subsection applies the concept of null space to differential equations.

Example 3.4.13. A differential equation.

Fix an interval \(X\subseteq \R\text{.}\) Let \(S\) be the set of functions in \(C^1(X)\) satisfying the differential equation
\begin{equation} f'=f\text{:}\tag{3.4.1} \end{equation}
i.e., \(S=\{f\in C^1(X)\colon f'(x)=f(x) \text{ for all } x\in X\} \text{.}\) Define \(T\colon C^1(X)\rightarrow C(X)\) as the differential operator \(T(f)=f'-f\text{.}\) We have
\begin{align*} f\in S \amp\iff f'=f \\ \amp\iff f'-f=\boldzero \\ \amp\iff T(f)=\boldzero \\ \amp \iff f\in \NS T\text{.} \end{align*}
Thus \(S=\NS T\text{,}\) and we see that the set of solutions to (3.4.1) has the structure of a subspace. That is helpful information for us. For example, since \(S=\NS T\) is closed under vector addition and scalar multiplication, we know that if \(f\) and \(g\) are solutions to (3.4.1), then so is \(cf+dg\) for any \(c,d\in\R\text{.}\)
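For this particular equation, Sage’s ODE solver makes the subspace structure visible; a short sketch:

    x = var('x')
    f = function('f')(x)
    desolve(diff(f, x) == f, f)   # _C*e^x: all scalar multiples of e^x,
                                  # a one-dimensional subspace of C^1(X)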

Subsection 3.4.2 Injective and surjective linear transformations

Recall the notions of injectivity and surjectivity from Definition 0.2.7: a function \(f\colon X\rightarrow Y\) is injective (or one-to-one) if for all \(x,x'\in X\) we have \(f(x)=f(x')\) implies \(x=x'\text{;}\) it is surjective (or onto) if for all \(y\in Y\) there is an \(x\in X\) with \(f(x)=y\text{.}\) As with all functions, we will be interested to know whether a given linear transformation is injective or surjective; as it turns out, the concepts of null space and image give us a convenient manner of answering these questions. As remarked in Definition 0.2.7, there is in general a direct connection between the surjectivity and the image of a function: namely, \(f\colon X\rightarrow Y\) is surjective if and only if \(\im f=Y\text{.}\) It follows immediately that a linear transformation \(T\colon V\rightarrow W\) is surjective if and only if \(\im T=W\text{.}\) As for injectivity, it is relatively easy to see that if a linear transformation \(T\) is injective, then its null space must consist of just the zero vector of \(V\text{.}\) What is somewhat surprising is that the converse is also true, as described in (2) of the theorem below.

Theorem 3.4.14. Injectivity and the null space.

Let \(T\colon V\rightarrow W\) be a linear transformation.
  1. For all \(\boldv, \boldv'\in V\) we have
    \begin{equation} T(\boldv)=T(\boldv')\iff \boldv'=\boldv+\boldu \text{ for some } \boldu\in \NS T\text{.}\tag{3.4.2} \end{equation}
    More generally, if \(T(\boldv)=\boldw\text{,}\) then
    \begin{equation} T(\boldv')=\boldw\iff \boldv'=\boldv+\boldu \text{ for some } \boldu\in \NS T\text{.}\tag{3.4.3} \end{equation}
  2. \(T\) is injective if and only if \(\NS T=\{\boldzero_V\}\text{.}\)
Proof.
  1. We have
    \begin{align*} T(\boldv)=T(\boldv') \amp \iff T(\boldv')-T(\boldv)=\boldzero\\ \amp \iff T(\boldv'-\boldv)=\boldzero \amp (T \text{ is linear})\\ \amp \iff \boldu=\boldv'-\boldv\in\NS T \amp (\text{def. } \NS T)\\ \amp \iff \boldv'=\boldv+\boldu \text{ for some } \boldu\in\NS T \text{.} \end{align*}
    Equation (3.4.3) follows directly from (3.4.2) by observing that if \(T(\boldv)=\boldw\text{,}\) then \(T(\boldv')=\boldw\) if and only if \(T(\boldv)=T(\boldv')\text{.}\)
  2. According to (3.4.2) we have \(T(\boldv)=T(\boldv')\) if and only if \(\boldv'=\boldv+\boldu\) for some \(\boldu\in \NS T\text{.}\)
    If \(\NS T=\{\boldzero_V\}\text{,}\) then \(T(\boldv)=T(\boldv')\) implies \(\boldv'=\boldv+\boldzero=\boldv\text{.}\) Thus \(T\) is injective in this case.
    Conversely, if \(\NS T\ne \{\boldzero_V\}\) we can find a nonzero \(\boldu\in \NS T\text{.}\) It follows that for any \(\boldv\in V\) we have \(T(\boldv)=T(\boldv+\boldu)\text{.}\) Furthermore, since \(\boldu\ne\boldzero_V\text{,}\) we have \(\boldv\ne \boldv+\boldu\text{.}\) Thus \(T\) is not injective in this case.

Remark 3.4.15.

To determine whether a function of sets \(f\colon X\rightarrow Y\) is injective, we normally have to show that for each output \(y\) in the image of \(f\) there is exactly one input \(x\) satisfying \(f(x)=y\text{.}\) Think of this as checking injectivity at every output. Theorem 3.4.14 tells us that in the special case of a linear transformation \(T\colon V\rightarrow W\) it is enough to check injectivity at exactly one output: namely, \(\boldzero\in W\text{.}\)
Let \(T\colon V\rightarrow W\) be a linear transformation, and let \(\boldw\in W\text{.}\) Equation (3.4.3) can be interpreted as follows: if we can find one particular input \(\boldv_p\) satisfying \(T(\boldv_p)=\boldw\text{,}\) then the set \(X_\boldw\) of all inputs \(\boldv\) satisfying \(T(\boldv)=\boldw\) is given by
\begin{equation*} X_\boldw=\{\boldv_p+\boldu\colon \boldu\in \NS T\}\text{.} \end{equation*}
This set \(X_\boldw\) is not necessarily a subspace. Indeed, if \(\boldw\ne \boldzero_W\text{,}\) then \(\boldzero\notin X_\boldw\text{!}\) Instead, \(X_\boldw\) is what is called the translate of the subspace \(\NS T\) by the vector \(\boldv_p\text{,}\) and is denoted as \(X_\boldw=\boldv_p+\NS T\text{.}\) The corollary below is an application of this observation to solutions to matrix equations (equivalently, linear systems). It is obtained by treating the special case of Theorem 3.4.14 where \(T=T_A\) is a matrix transformation.

Corollary 3.4.16. Solutions to \(A\boldx=\boldb\).

Let \(A\) be an \(m\times n\) matrix and let \(\boldb\in \R^m\text{.}\) If \(\boldx_p\) is one particular solution to \(A\boldx=\boldb\text{,}\) then the set \(S\) of all solutions to \(A\boldx=\boldb\) is the translate
\begin{equation*} S=\boldx_p+\NS A=\{\boldx_p+\boldu\colon \boldu\in \NS A\}\text{.} \end{equation*}
Let’s use Sage and Corollary 3.4.16 to find the set of solutions \(S\subseteq \R^5\) to the matrix equation
\begin{equation} \begin{amatrix}[rrrrr] 0\amp 0\amp -2\amp 0\amp 7\\ 2\amp 4\amp -10\amp 6\amp 12\\ 2\amp 4\amp -5\amp 6\amp -5 \end{amatrix} \begin{amatrix}[c] x_1\\ x_2\\ x_3\\ x_4\\ x_5 \end{amatrix}= \begin{amatrix}[r] 12\\ 28\\ -1 \end{amatrix}\text{.}\tag{3.4.5} \end{equation}
This is the matrix equation form of the linear system we investigated in Sage example 2. The method solve_right can be used to find a particular solution \(\boldx_p\) to (3.4.5).
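The original interactive cell is not reproduced here; a minimal sketch of what it might look like, entering \(A\) and \(\boldb\) as displayed in (3.4.5):

    A = Matrix([[0, 0, -2, 0, 7],
                [2, 4, -10, 6, 12],
                [2, 4, -5, 6, -5]])
    b = vector([12, 28, -1])
    xp = A.solve_right(b)   # one particular solution to (3.4.5)
    xp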
We get the entire set of solutions \(S\) by translating \(\NS A\) by the particular solution \(\boldx_p\text{:}\)
\begin{equation*} S=\{\boldx_p+\boldu\colon \boldu\in \NS A\}=\boldx_p+\NS A\text{.} \end{equation*}
We can illustrate this in Sage by taking random elements of \(\NS A\) (computed using right_kernel), adding them to xp, and verifying that the result is a solution to (3.4.5). Each time you evaluate the cell below, a randomly generated element of \(S\) will be output.
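A sketch of such a cell, assuming A, b, and xp from above:

    NS = A.right_kernel()
    x = xp + NS.random_element()   # a random element of S = xp + NS A
    print(A*x == b)                # True: the translate is again a solution
    NS                             # Sage's description of NS A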
You may wonder just how random these elements of \(S\) are, considering that the entries always seem to be integers! Indeed, soliciting information about NS from Sage, we see that it has the structure of a “free module” defined over the “Integer Ring”.
Without getting too far into the weeds, this is a result of our initial definition of \(A\) using Matrix(). Without further information, Sage interprets this as a matrix with integer coefficients, as opposed to real coefficients. All further computations (e.g., xp and NS) are done in a similar spirit. More precisely, the object NS generated by Sage consists of all integer linear combinations of the two rows in the “echelon basis matrix” displayed in the cell above. The next cell shows you how things change when we alert Sage to the fact that we are dealing with matrices over the reals. The only change is adding RR to Matrix(), which specifies that matrix coefficients should be understood as real numbers.
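A sketch of that cell:

    A = Matrix(RR, [[0, 0, -2, 0, 7],
                    [2, 4, -10, 6, 12],
                    [2, 4, -5, 6, -5]])
    A.right_kernel()   # now a 2-dimensional vector space over the Real Field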
Our final example uses Corollary 3.4.16 to nicely round out the discussion of lines and planes in \(\R^2\) and \(\R^3\) begun in Example 3.3.14. The conclusion is that although it is not the case that all lines and planes are subspaces, it is the case that they are translates of subspaces.

Example 3.4.17. Lines and planes (again).

Let \(\ell\colon ax+by=c\) be a line in \(\R^2\text{.}\) This line does not necessarily pass through the origin, but the line \(\ell_0\colon ax+by=0\) does. Using linear algebra we recognize \(\ell\) and \(\ell_0\) as the solutions to the matrix equations \(A\boldx=c\) and \(A\boldx=0\text{,}\) respectively, where
\begin{equation*} A=\begin{bmatrix}a\amp b \end{bmatrix}\text{.} \end{equation*}
Furthermore we see that \(\ell_0\) is none other than \(\NS A\text{.}\) Now, fix a particular point \(P=(x_0,y_0)\in \ell\text{.}\) Since \((x_0, y_0)\) is a solution to \(A\boldx=c\text{,}\) according to Corollary 3.4.16 we have
\begin{equation*} \ell=\{P+Q\colon Q=(x,y)\in \NS A\}=\{P+Q\colon Q\in \ell_0\}\text{.} \end{equation*}
In other words we see that \(\ell\) is just the translate of the line \(\ell_0\) by the vector \(\vec{OP}=(x_0,y_0)\text{.}\) Since \(\ell_0=\NS A\) is a subspace, we conclude that all lines in \(\R^2\) are translates of a subspace.
A very similar argument can be given for an arbitrary plane \(\mathcal{P}\colon ax+by+cz=d\) in \(\R^3\) to show that it is a translate of \(\mathcal{P}_0\colon ax+by+cz=0\text{,}\) which itself is a subspace.
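The picture is easy to reproduce in Sage; the sample line \(2x+3y=6\) below is our own choice, not from the text:

    A = Matrix(QQ, [[2, 3]])
    xp = A.solve_right(vector([6]))   # a particular point P on the line
    L0 = A.right_kernel()             # the parallel line through the origin, NS A
    A*(xp + L0.random_element()) == vector([6])   # True: P + Q stays on the line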

Exercises 3.4.3 Exercises

True/False Questions

1.
Let \(A=\left[\begin{matrix} 1 \amp 3 \amp -8 \cr 0 \amp 1 \amp -3 \cr \end{matrix}\right]\text{,}\) \({\bf b}=\left[\begin{matrix} -1 \cr 3 \cr 1 \cr \end{matrix}\right]\text{,}\) and \({\bf c}=\left[\begin{matrix} 1 \cr 0 \cr \end{matrix}\right]\text{.}\) Define \(T({\bf x})=A{\bf x}\text{.}\)
Select true or false for each statement.
  1. The vector \({\bf c}\) is in the range of \(T\)
  2. The vector \({\bf b}\) is in the kernel of \(T\)
Solution.
\(A{\bf b}=\left[\begin{matrix} 1 \amp 3 \amp -8 \cr 0 \amp 1 \amp -3 \cr \end{matrix}\right] \left[\begin{matrix} -1 \cr 3 \cr 1 \cr \end{matrix}\right]= \left[\begin{matrix} 0 \cr 0 \cr \end{matrix}\right]\text{,}\) so \({\bf b}\in \ker(T)\text{.}\) We row-reduce to determine a solution of \(A{\bf x}={\bf c}\text{.}\) \(\left[\begin{matrix} 1 \amp 3 \amp -8 \amp 1 \cr 0 \amp 1 \amp -3 \amp 0 \cr \end{matrix}\right]\sim \left[\begin{matrix} 1 \amp 0 \amp 1 \amp 1 \cr 0 \amp 1 \amp -3 \amp 0 \cr \end{matrix}\right].\) Thus \(A \left[\begin{matrix} 1 \cr 0 \cr 0 \cr \end{matrix}\right] = {\bf c}\text{,}\) so \({\bf c}\in\) range\((T)\text{.}\)
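Checks like these can also be scripted; a Sage sketch for this exercise:

    A = Matrix([[1, 3, -8],
                [0, 1, -3]])
    b = vector([-1, 3, 1])
    c = vector([1, 0])
    print(A*b == vector([0, 0]))   # True: b is in ker(T)
    print(A.solve_right(c))        # a solution exists, so c is in range(T)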
2.
Let \(T({\bf x})=A{\bf x}\text{,}\) where
\(A=\left[\begin{array}{ccc} 1 \amp 0 \amp -1\cr 5 \amp -5 \amp 5\cr -7 \amp -4 \amp 15 \end{array}\right]\text{,}\) \({\bf b}=\left[\begin{array}{c}1\\2\\1\\\end{array}\right]\text{,}\) and \({\bf c}=\left[\begin{array}{c}0\\-5\\-4\\\end{array}\right]\text{.}\)
Select true or false for each statement.
  1. The vector \({\bf b}\) is in the kernel of \(T\text{.}\)
  2. The vector \({\bf c}\) is in the range of \(T\text{.}\)
Answer 1.
\(\text{True}\)
Answer 2.
\(\text{True}\)
Solution.
\(A{\bf b}=\left[\begin{array}{ccc} 1 \amp 0 \amp -1\cr 5 \amp -5 \amp 5\cr -7 \amp -4 \amp 15 \end{array}\right] \left[\begin{array}{c}1\\2\\1\\\end{array}\right]= \left[\begin{array}{c}0\\0\\0\\\end{array}\right] ={\bf 0}\text{,}\) so \({\bf b}\in \ker(T)\text{.}\) To see that \({\bf c}\) is in the range, we row-reduce to determine a solution of \(A{\bf x}={\bf c}\text{:}\) \(\left[\begin{array}{cccc} 1 \amp 0 \amp -1 \amp 0\cr 5 \amp -5 \amp 5 \amp -5\cr -7 \amp -4 \amp 15 \amp -4 \end{array}\right] \sim \left[\begin{array}{cccc} 1 \amp 0 \amp -1 \amp 0\cr 0 \amp 1 \amp -2 \amp 1\cr 0 \amp 0 \amp 0 \amp 0 \end{array}\right]\text{,}\) which shows that the equation \(A{\bf x}={\bf c}\) has infinitely many solutions: \({\bf x} = \left[\begin{array}{c}0\\1\\0\\\end{array}\right] + t \left[\begin{array}{c}1\\2\\1\\\end{array}\right]\) for arbitrary \(t \in \mathbb{R}\text{.}\) Alternatively, simply notice that \({\bf c}\) is the second column of \(A\text{.}\) Both approaches lead to the conclusion that \(A \left[\begin{array}{c}0\\1\\0\\\end{array}\right] = {\bf c}\text{.}\) Thus \({\bf c}\in\) range\((T)\text{.}\)
3.
If \(T:{\mathbb R}^4\to {\mathbb R}^3\) is a linear transformation, consider whether the set \(\ker(T)\) is a subspace of \({\mathbb R}^{4}\text{.}\)
Select true or false for each statement.
  1. This set is a subspace of \({\mathbb R}^4\text{.}\)
  2. This set contains the zero vector and is closed under vector addition and scalar multiplication.
  3. This set is a subset of the domain.
  4. This set is a subset of the codomain.
Solution.
The set \(\ker(T)\) is a subspace of the domain \({\mathbb R}^4\text{,}\) that contains the zero vector, and is closed under vector addition and scalar multiplication. It is not a subset of the codomain \({\mathbb R}^3\text{.}\)
4.
Let \(A=\left[\begin{matrix} 3 \amp 5 \cr 3 \amp 2 \cr -7 \amp -4\cr \end{matrix}\right]\text{,}\) \({\bf b}=\left[\begin{matrix} 3 \cr -4 \cr \end{matrix}\right]\text{,}\) and \({\bf c}=\left[\begin{matrix} 1 \cr 6 \cr \end{matrix}\right]\text{.}\) Define \(T({\bf x})=A{\bf x}\text{.}\)
Select true or false for each statement.
  1. The vector \({\bf b}\) is in the kernel of \(T\)
  2. The vector \({\bf c}\) is in the range of \(T\)
Solution.
\(A{\bf b}=\left[\begin{matrix} 3 \amp 5 \cr 3 \amp 2 \cr -7 \amp -4\cr \end{matrix}\right] \left[\begin{matrix} 3 \cr -4 \cr \end{matrix}\right]= \left[\begin{matrix} -11 \cr 1 \cr -5 \cr \end{matrix}\right] \ne {\bf 0}\text{,}\) so \({\bf b}\not\in \ker(T)\text{.}\) Since the range of \(T\) is a subset of \({\mathbb R}^3\) and \({\bf c}\in{\mathbb R}^2\text{,}\) \({\bf c}\not\in\) range\((T)\text{.}\)
5.
If \(T:{\mathbb R}^6\to {\mathbb R}^3\) is a linear transformation, then select true or false for each statement about the set \(\ker( T)\text{.}\)
  1. This set contains the zero vector and is closed under vector addition and scalar multiplication.
  2. This set is a subset of the codomain.
  3. This set is a subspace of \(\mathbb{R}^3\text{.}\)
  4. This set is a subset of the domain.
Solution.
The set \(\ker(T)\) is a subspace of the domain \({\mathbb R}^6\text{,}\) not the codomain \({\mathbb R}^3\text{.}\)

WeBWork Exercises

6.
Let \(T\) be a one-to-one linear transformation from \({\mathbb R}^r\) to \({\mathbb R}^s\text{.}\)
  1. What can one say about the relationship between \(r\) and \(s\text{?}\)
  1. \(\displaystyle r\lt s\)
  2. \(\displaystyle r\leq s\)
  3. \(\displaystyle r\geq s\)
  4. \(\displaystyle r>s\)
  5. There is not enough information to tell
7.
Let \(T\) be an onto linear transformation from \({\mathbb R}^r\) to \({\mathbb R}^s\text{.}\)
  1. What can one say about the relationship between \(r\) and \(s\text{?}\)
  1. \(\displaystyle r\geq s\)
  2. \(\displaystyle r>s\)
  3. \(\displaystyle r\leq s\)
  4. \(\displaystyle r\lt s\)
  5. There is not enough information to tell
8.
Let \(T\) be a linear transformation from \({\mathbb R}^r\) to \({\mathbb R}^s\text{.}\) Let \(A\) be the matrix associated to \(T\text{.}\)
Fill in the correct answer for each of the following situations.
  1. Every column in the row-echelon form of \(A\) is a pivot column.
  2. Two columns in the row-echelon form of \(A\) are not pivot columns.
  3. The row-echelon form of \(A\) has a column corresponding to a free variable.
  4. The row-echelon form of \(A\) has no column corresponding to a free variable.
  1. T is not one-to-one
  2. T is one-to-one
  3. There is not enough information to tell.
9.
Let \(T\) be a linear transformation from \({\mathbb R}^3\) to \({\mathbb R}^3\) .
Determine whether or not \(T\) is onto in each of the following situations:
  1. Suppose \(T(4, -4, 3)=u\text{,}\) \(T(-2, 4, 2)=v\text{,}\) \(T(2, 1, 5)=u+v\text{.}\)
  2. Suppose \(T(a) = u\text{,}\) \(T(b) = v\text{,}\) \(T(c)=u+v\text{,}\) where \(a, b, c, u,v\) are vectors in \({\mathbb R}^3\text{.}\)
  3. Suppose \(T\) is a one-to-one function
  1. T is onto.
  2. T is not onto.
  3. There is not enough information to tell
10.
Match the following concepts with the correct definitions:
  1. \(f\) is an onto function from \({\mathbb R}^3\) to \({\mathbb R}^3\)
  2. \(f\) is a one-to-one function from \({\mathbb R}^3\) to \({\mathbb R}^3\)
  3. \(f\) is a function from \({\mathbb R}^3\) to \({\mathbb R}^3\)
  1. For every \(y\in {\mathbb R}^3\text{,}\) there is a \(x\in {\mathbb R}^3\) such that \(f(x)=y\text{.}\)
  2. For every \(x\in {\mathbb R}^3\text{,}\) there is a \(y\in {\mathbb R}^3\) such that \(f(x)=y\text{.}\)
  3. For every \(y\in {\mathbb R}^3\text{,}\) there is a unique \(x\in {\mathbb R}^3\) such that \(f(x)=y\text{.}\)
  4. For every \(y \in {\mathbb R}^3\text{,}\) there is at most one \(x\in {\mathbb R}^3\) such that \(f(x)=y\text{.}\)
11.
Let \(T\) be a linear transformation from \({\mathbb R}^r\) to \({\mathbb R}^s\text{.}\) Let \(A\) be the matrix associated to \(T\text{.}\)
Fill in the correct answer for each of the following situations.
  1. Two rows in the row-echelon form of \(A\) do not have pivots.
  2. The row-echelon form of \(A\) has a row of zeros.
  3. Every row in the row-echelon form of \(A\) has a pivot.
  4. The row-echelon form of \(A\) has a pivot in every column.
  1. T is onto
  2. T is not onto
  3. There is not enough information to tell.
12.
Let \(T: {\mathbb R}^3 \rightarrow {\mathbb R}^3\) be the linear transformation defined by
\begin{equation*} T(x_1, x_2, x_3)= (x_1-x_2, x_2-x_3, x_3-x_1)\text{.} \end{equation*}
Find a vector \(\vec{w} \in {\mathbb R}^3\) that is NOT in the image of \(T\text{.}\)
\(\vec{w} =\) (3 × 1 array)
and find a different, nonzero vector \(\vec{v} \in {\mathbb R}^3\) that IS in the image of \(T\text{.}\)
\(\vec{v} =\) (3 × 1 array).
13.
Let
\begin{equation*} A = \left[\begin{array}{cc} 6 \amp -1\cr 5 \amp 4\cr 3 \amp -15 \end{array}\right] \ \mbox{ and } \ \vec{b} = \left[\begin{array}{c} 22\cr 28\cr -18 \end{array}\right]. \end{equation*}
Define the linear transformation \(T: {\mathbb R}^2 \rightarrow {\mathbb R}^3\) by \(T(\vec{x}) = A\vec{x}\text{.}\) Find a vector \(\vec{x}\) whose image under \(T\) is \(\vec{b}\text{.}\)
\(\vec{x} =\) (2 × 1 array).
Is the vector \(\vec{x}\) unique?
Answer.
\(\text{unique}\)

Exercise Group.

For each linear transformation \(T\) give parametric descriptions of \(\NS T\) and \(\im T\text{.}\) To do so you will want to relate each computation to a system of linear equations. (See Example 3.4.12 for an example of computing an image.)
14.
\begin{align*} T\colon \R^4 \amp \rightarrow \R^3 \\ (x,y,z,w)\amp\mapsto (x+z+w, x-y-z,-2x+y-w) \end{align*}
15.
\begin{align*} T\colon M_{22}\amp \rightarrow M_{22} \\ A \amp\mapsto \begin{amatrix}[rr]1\amp 1\\ 1\amp 1 \end{amatrix}A \end{align*}
16.
\begin{align*} T\colon P_2\amp \rightarrow P_2 \\ f \amp\mapsto f(x)+f'(x) \end{align*}

Exercise Group.

For the given linear transformation \(T\) prove the claim about \(\im T\text{.}\)
17.
\begin{align*} T\colon \R^\infty \amp \rightarrow \R^\infty \\ s=(a_1,a_2,\dots)\amp\mapsto T(s)=(a_2,a_3,\dots) \text{.} \end{align*}
Claim: \(\im T=\R^\infty\)
18.
\begin{align*} T\colon C(\R) \amp \rightarrow C(\R) \\ f(x)\amp\mapsto g(x)=f(x)+f(-x)\text{.} \end{align*}
Claim: \(\im T\) is the set of all continuous symmetric functions. In other words,
\begin{equation*} \im T=\{f\in C(\R)\colon f(-x)=f(x)\}. \end{equation*}

19.

Define \(T\colon P_5\rightarrow \R^3\) as \(T(p(x))=(p(-2), p(3), p(7))\text{:}\) i.e., the value of \(T\) at the input polynomial \(p(x)\in P_5\) is computed by evaluating \(p\) at the inputs \(x=-2,3,7\text{.}\)
  1. Prove: \(T\) is a linear transformation.
  2. Prove: \(W=\{p(x)\in P_5\colon p(-2)=p(3)=p(7)=0\}\) is a subspace of \(P_5\text{.}\)

Exercise Group.

For each subset \(W\subseteq V\) show \(W\) is a subspace by identifying it with the null space of a linear transformation \(T\text{.}\) You may use any of the examples from Section 3.2, and any of the results from the exercises in Exercises 3.2.6.
20.
\begin{equation*} W=\{A\in M_{nn}\colon \tr A=0\} \end{equation*}
21.
\begin{equation*} W=\{A\in M_{nn}\colon A^T=-A\} \end{equation*}
22.
\begin{equation*} W=\{f\in C^2(\R)\colon f''=2f'-3f\} \end{equation*}

Exercise Group.

For each \(m\times n\) matrix \(A\) and vector \(\boldb\in \R^m\text{:}\)
  1. Find a particular solution \(\boldx_p\) to \(A\boldx=\boldb\text{.}\)
  2. Find all solutions to the corresponding homogeneous matrix equation \(A\boldx=\boldzero\text{.}\)
  3. Use (a), (b), and Corollary 3.4.16 to describe all solutions to \(A\boldx=\boldb\text{.}\)
23.
\begin{equation*} A=\begin{amatrix}[rrrr]1\amp 2\amp 1\amp 1\\ 1\amp 1\amp 2\amp 3 \end{amatrix}, \boldb=(3,2) \end{equation*}
24.
\begin{equation*} A=\begin{amatrix}[rrr]1\amp 1\amp -3 \\ 3\amp -1\amp -1 \\ 1\amp 0\amp -1 \end{amatrix}, \boldb=(2,1,-4) \end{equation*}
25.
\begin{equation*} A=\begin{amatrix}[rrr]1\amp 2\amp 1 \\ 1\amp 1\amp -3 \\ 1\amp 0\amp -1 \end{amatrix}, \boldb=(1,1,1) \end{equation*}