
Section 3.8 Rank-nullity theorem and fundamental spaces

This section is in a sense just a long-format example of how to compute bases and dimensions of subspaces. Along the way, however, we meet the rank-nullity theorem (sometimes called the “fundamental theorem of linear algebra”), and apply this theorem in the context of fundamental spaces of matrices (Definition 3.8.5).

Subsection 3.8.1 The rank-nullity theorem

The rank-nullity theorem relates the dimensions of the null space and image of a linear transformation \(T\colon V\rightarrow W\text{,}\) assuming \(V\) is finite dimensional. Roughly speaking, it says that the bigger the null space, the smaller the image. More precisely, it tells us that
\begin{equation*} \dim V=\dim\NS T+\dim\im T\text{.} \end{equation*}
As we will see, this elegant result can be used to significantly simplify computations with linear transformations. For example, in a situation where we wish to compute explicitly both the null space and image of a given linear transformation, we can often get away with just computing one of the two spaces and using the rank-nullity theorem (and a dimension argument) to easily determine the other. Additionally, the rank-nullity theorem will directly imply some intuitively obvious properties of linear transformations. For example, suppose \(V\) is a finite-dimensional vector space. It seems obvious that if \(\dim W> \dim V\text{,}\) then there is no linear transformation mapping \(V\) surjectively onto \(W\text{:}\) i.e., you should not be able to map a “smaller” vector space onto a “bigger” one. Similarly, if \(\dim W \lt \dim V\text{,}\) then we expect that there is no linear transformation mapping \(V\) injectively into \(W\text{.}\) Both these results are easy consequences of the rank-nullity theorem.
Before proving the theorem we give names to \(\dim \NS T\) and \(\dim\im T\text{.}\)

Definition 3.8.1. Rank and nullity.

Let \(T\colon V\rightarrow W\) be a linear transformation.
  • The rank of \(T\text{,}\) denoted \(\rank T\text{,}\) is the dimension of \(\im T\text{:}\) i.e.,
    \begin{equation*} \rank T=\dim\im T\text{.} \end{equation*}
  • The nullity of \(T\text{,}\) denoted \(\nullity T\text{,}\) is the dimension of \(\NS T\text{:}\) i.e.,
    \begin{equation*} \nullity T=\dim\NS T\text{.} \end{equation*}
Theorem 3.8.2. Rank-nullity theorem.

Let \(T\colon V\rightarrow W\) be a linear transformation, and assume \(V\) is finite dimensional. Then
\begin{equation*} \dim V=\dim\NS T+\dim\im T=\nullity T+\rank T\text{.} \end{equation*}

Proof.

Choose a basis \(B'=\{\boldv_1, \boldv_2, \dots, \boldv_k\}\) of \(\NS T\) and extend \(B'\) to a basis \(B=\{\boldv_1, \boldv_2,\dots, \boldv_k,\boldv_{k+1},\dots, \boldv_n\}\text{,}\) using Theorem 3.7.8. Observe that \(\dim\NS T=\nullity T=k\) and \(\dim V=n\text{.}\)
We claim that \(B''=\{T(\boldv_{k+1}),T(\boldv_{k+2}),\dots, T(\boldv_{n})\}\) is a basis of \(\im T\text{.}\)
\(B''\) is linearly independent.
Suppose \(a_{k+1}T(\boldv_{k+1})+a_{k+2}T(\boldv_{k+2})+\cdots +a_nT(\boldv_n)=\boldzero\text{.}\) Then the vector \(\boldv=a_{k+1}\boldv_{k+1}+a_{k+2}\boldv_{k+2}+\cdots +a_n\boldv_n\) satisfies \(T(\boldv)=\boldzero\) (using linearity of \(T\)), and hence \(\boldv\in \NS T\text{.}\) Then, using the fact that \(B'\) is a basis of \(\NS T\text{,}\) we have
\begin{equation*} b_1\boldv_1+b_2\boldv_2+\cdots +b_k\boldv_k=\boldv=a_{k+1}\boldv_{k+1}+a_{k+2}\boldv_{k+2}+\cdots +a_n\boldv_n, \end{equation*}
and hence
\begin{equation*} b_1\boldv_1+b_2\boldv_2+\cdots +b_k\boldv_k-a_{k+1}\boldv_{k+1}-a_{k+2}\boldv_{k+2}-\cdots -a_n\boldv_n=\boldzero. \end{equation*}
Since the set \(B\) is linearly independent, we conclude that \(b_i=a_j=0\) for all \(1\leq i\leq k\) and \(k+1\leq j\leq n\text{.}\) In particular, \(a_{k+1}=a_{k+2}=\cdots=a_n=0\text{,}\) as desired.
\(B''\) spans \(\im T\).
It is clear that \(\Span B''\subseteq \im T\) since \(T(\boldv_i)\in \im T\) for all \(k+1\leq i\leq n\) and \(\im T\) is closed under linear combinations.
For the other direction, suppose \(\boldw\in \im T\text{.}\) Then there is a \(\boldv\in V\) such that \(\boldw=T(\boldv)\text{.}\) Since \(B\) is a basis of \(V\) we may write
\begin{equation*} \boldv=a_1\boldv_1+a_2\boldv_2+\cdots +a_k\boldv_k+a_{k+1}\boldv_{k+1}+\cdots +a_n\boldv_n\text{,} \end{equation*}
in which case
\begin{align*} \boldw=T(\boldv)\amp= T(a_1\boldv_1+a_2\boldv_2+\cdots +a_k\boldv_k+a_{k+1}\boldv_{k+1}+\cdots +a_n\boldv_n)\\ \amp=a_1T(\boldv_1)+a_2T(\boldv_2)+\cdots +a_kT(\boldv_k)+a_{k+1}T(\boldv_{k+1})+\cdots +a_nT(\boldv_n) \amp (T \text{ is linear })\\ \amp=\boldzero +a_{k+1}T(\boldv_{k+1})+a_{k+2}T(\boldv_{k+2})+\cdots +a_nT(\boldv_n) \amp (\boldv_i\in\NS T \text{ for } 1\leq i\leq k) \text{.} \end{align*}
This shows that \(\boldw=a_{k+1}T(\boldv_{k+1})+a_{k+2}T(\boldv_{k+2})+\cdots +a_nT(\boldv_n)\in \Span B''\text{,}\) as desired.
Having shown \(B''\) is a basis for \(\im T\text{,}\) we conclude that \(\dim \im T=\val{B''}=n-(k+1)+1=n-k\text{,}\) and thus that
\begin{align*} \dim V \amp=k+(n-k) \\ \amp=\dim\NS T+\dim \im T \\ \amp = \nullity T+\rank T\text{.} \end{align*}

Example 3.8.3. Rank-nullity: verification.

Verify the rank-nullity theorem for the linear transformation \(T\colon \R^3\rightarrow \R^2\) defined as \(T(x,y,z)=(x+y+z,x+y+z)\text{.}\)
Solution.
To verify the rank-nullity theorem, we must compute bases for \(\NS T\) and \(\im T\text{.}\) Consider first \(\NS T\text{.}\) We have
\begin{align*} \NS T \amp =\{(x,y,z)\colon x+y+z=0\}\\ \amp =\{(-s-t,s,t)\colon s,t\in\R\} \text{.} \end{align*}
Here the parametric description is obtained using our usual technique for solving systems of equations (Procedure 1.3.5). From the parametric description, it is clear that the set \(B=\{(-1,1,0), (-1,0,1)\}\) spans \(\NS T\text{.}\) Since \(B\) is clearly linearly independent, it is a basis for \(\NS T\text{,}\) and we conclude that \(\dim \NS T=\val{B}=2\text{.}\) (Alternatively, the equation \(x+y+z=0\) defines a plane passing through the origin in \(\R^3\text{,}\) and we know such subspaces are of dimension two.)
Next, it is fairly clear that \(\im T=\{(t,t)\colon t\in \R\}=\Span\{(1,1)\}\text{.}\) Thus \(B'=\{(1,1)\}\) is a basis for \(\im T\) and \(\dim\im T=\val{B'}=1\text{.}\)
Finally we observe that
\begin{equation*} \nullity T+\rank T=\dim\NS T+\dim\im T=2+1=3=\dim \R^3, \end{equation*}
as predicted by the rank-nullity theorem.
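If SymPy is available, this example can be double-checked in a few lines. The following is a minimal sketch (not part of the verification above), representing \(T\) by its standard matrix:

from sympy import Matrix

A = Matrix([[1, 1, 1],
            [1, 1, 1]])       # standard matrix of T(x,y,z) = (x+y+z, x+y+z)
print(A.nullspace())          # two vectors spanning the plane x + y + z = 0
print(A.columnspace())        # [Matrix([[1], [1]])], so dim im T = 1
print(len(A.nullspace()) + A.rank())   # 3 = dim R^3, as rank-nullity predicts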

Example 3.8.4. Rank-nullity: application.

Show that the linear transformation
\begin{align*} T\colon \R^4 \amp\rightarrow \R^3 \\ (x,y,z,w)\amp \mapsto (x+w,y+z+w,z+2w) \end{align*}
is surjective: i.e., \(\im T=\R^3\text{.}\) Do so by first computing \(\NS T\text{.}\)
Solution.
We first examine \(\NS T\text{.}\) We have
\begin{equation*} T(x,y,z,w)=\boldzero \iff \begin{linsys}{4} x\amp \amp \amp \amp \amp + \amp w\amp =\amp 0\\ \amp \amp y\amp+ \amp z \amp + \amp w\amp =\amp 0\\ \amp \amp \amp\amp z \amp + \amp 2w\amp =\amp 0 \end{linsys}\text{.} \end{equation*}
The system above is already in row echelon form, and so we easily see that
\begin{equation*} \NS T=\{(-t,t,-2t,t)\colon t\in \R\}=\Span\{(-1,1,-2,1)\}\text{.} \end{equation*}
Thus \(B=\{(-1,1,-2,1)\}\) is a basis of \(\NS T\text{,}\) and we conclude that \(\dim \NS T=1\text{.}\) The rank-nullity theorem now implies that
\begin{equation*} \dim \im T=4-\dim \NS T=4-1=3\text{.} \end{equation*}
Since \(\im T\subseteq \R^3\) and \(\dim\im T=\dim \R^3=3\text{,}\) we conclude by Corollary 3.7.10 that \(\im T=\R^3\text{.}\) Thus \(T\) is surjective.
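The same reasoning can be sketched in SymPy (assuming it is available): we compute \(\NS T\) and then need only the rank of the standard matrix to conclude surjectivity.

from sympy import Matrix

A = Matrix([[1, 0, 0, 1],
            [0, 1, 1, 1],
            [0, 0, 1, 2]])    # standard matrix of T
print(A.nullspace())          # [Matrix([[-1], [1], [-2], [1]])]: nullity T = 1
print(A.rank())               # 3 = dim R^3, so im T = R^3 and T is surjective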

Subsection 3.8.2 Fundamental spaces of matrices

We now treat the special case of matrix transformations \(T_A\colon \R^n\rightarrow \R^m\text{.}\) The fundamental spaces of a matrix \(A\) defined below can each be connected to \(\NS T_A\) and \(\im T_A\text{,}\) and hence the rank-nullity theorem comes to bear in their analysis. Observe that \(\NS A\) was defined previously (Definition 3.4.5). We include it below to gather all the fundamental spaces together under one definition.

Definition 3.8.5. Fundamental spaces.

Let \(A\) be an \(m\times n\) matrix. Let \(\boldr_1,\dots, \boldr_m\) be the \(m\) rows of \(A\text{,}\) and let \(\boldc_1,\dots, \boldc_n\) be its \(n\) columns. The following subspaces are called the fundamental subspaces of \(A\).
  • The null space of \(A\), denoted \(\NS A\) is defined as
\begin{equation*} \NS A =\{\boldx\in\R^n\colon A\boldx=\boldzero\}\subseteq \R^n\text{.} \end{equation*}
  • The row space of \(A\), denoted \(\RS A\text{,}\) is defined as
    \begin{equation*} \RS A=\Span \{\boldr_1, \boldr_2, \dots, \boldr_m\}\subseteq \R^n\text{.} \end{equation*}
  • The column space of \(A\), denoted \(\CS A\text{,}\) is defined as
    \begin{equation*} \CS A=\Span \{\boldc_1, \boldc_2, \dots, \boldc_n\}\subseteq \R^m\text{.} \end{equation*}
The rank and nullity of \(A\text{,}\) denoted \(\rank A\) and \(\nullity A\text{,}\) respectively, are defined as \(\rank A=\dim \CS A\) and \(\nullity A=\dim\NS A\text{.}\)
How do the fundamental spaces of a matrix \(A\) relate to its associated matrix transformation \(T_A\text{?}\) It is easy to see that \(\NS T_A=\NS A\text{,}\) and indeed we made this connection in Section 3.4. What about \(\im T_A\text{?}\) We claim that \(\im T_A=\CS A\text{.}\) To see why, let \(\boldc_1, \boldc_2,\dots, \boldc_n\) be the columns of \(A\) and consider the following chain of equivalences:
\begin{align*} \boldy\in \im T_A \amp\iff \boldy=T_A(\boldx) \text{ for some } \boldx\in \mathbb{R}^n \\ \amp\iff \boldy=A \boldx \text{ for some } \boldx=(x_1,x_2,\dots, x_n)\in \mathbb{R}^n \amp (\text{definition of } T_A) \\ \amp \iff \boldy=x_1\boldc_1+x_2\boldc_2+\cdots +x_n\boldc_n \text{ for some } x_i\in \R \amp (\knowl{./knowl/th_column_method.html}{\text{Theorem 2.1.24}})\\ \amp \iff \boldy\in\CS A\text{.} \end{align*}
We now highlight three equivalent statements in this chain:
\begin{equation*} \boldy\in\CS A \iff \boldy=A \boldx \text{ for some } \boldx\in \R^n \iff \boldy\in \im T_A\text{.} \end{equation*}
The first equivalence tells us that \(\CS A\) is the set of \(\boldy\in \R^m\) for which the matrix equation \(A\boldx=\boldy\) is consistent. The second equivalence tells us that \(\CS A=\im T_A\text{.}\) In sum, we have proven the following result.

Theorem 3.8.6. Fundamental spaces and matrix transformations.

Let \(A\) be an \(m\times n\) matrix. Then
\begin{align} \NS A \amp=\NS T_A\tag{3.8.1}\\ \CS A \amp=\im T_A\tag{3.8.2}\\ \CS A \amp=\{\boldy\in\R^m\colon A\boldx=\boldy \text{ is consistent}\}\tag{3.8.3}\text{.} \end{align}
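Equation (3.8.3) is easy to test computationally: \(A\boldx=\boldy\) is consistent exactly when adjoining \(\boldy\) to \(A\) as an extra column does not increase the rank. Here is a minimal sketch, assuming Python with SymPy is available (the helper in_column_space is ours, for illustration only):

from sympy import Matrix

def in_column_space(A, y):
    # A x = y is consistent iff rank [A | y] == rank A
    return Matrix.hstack(A, y).rank() == A.rank()

A = Matrix([[1, 1],
            [1, 1]])                        # CS A = Span{(1, 1)}
print(in_column_space(A, Matrix([2, 2])))   # True
print(in_column_space(A, Matrix([1, 0])))   # False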
The next theorem indicates how row reduction affects fundamental spaces.

Theorem 3.8.7. Row reduction and fundamental spaces.

Assume the \(m\times n\) matrices \(A\) and \(B\) are row equivalent. Then
\begin{align} \NS A \amp=\NS B\tag{3.8.4}\\ \RS A \amp=\RS B\tag{3.8.5}\\ \dim\CS A \amp=\dim\CS B\tag{3.8.6}\text{.} \end{align}
However, we do not have \(\CS A=\CS B\) in general.

Proof.
First observe that \(A=\begin{bmatrix}1\amp 1\\ 1\amp 1 \end{bmatrix}\) is row equivalent to \(B=\begin{bmatrix}1\amp 1\\ 0 \amp 0\end{bmatrix}\) and yet \(\CS A=\Span\{(1,1)\}=\{(t,t)\colon t\in \R\}\ne \CS B=\Span\{(1,0)\}=\{(t,0)\colon t\in\R\}\text{.}\) Thus we do not have \(\CS A=\CS B\) in general.
We now turn to the equalities (3.8.4)–(3.8.6). Assume that \(A\) is row equivalent to \(B\text{.}\) Using the formulation of row reduction in terms of multiplication by elementary matrices, we see that there is an invertible matrix \(Q\) such that \(B=QA\text{,}\) and hence also \(A=Q^{-1}B\text{.}\) But then we have
\begin{align*} \boldx\in\NS A\amp\iff A\boldx=\boldzero \\ \amp\iff QA\boldx=Q\boldzero \amp (\knowl{./knowl/th_inverse_cancel.html}{\text{Theorem 2.3.3}}) \\ \amp \iff B\boldx=\boldzero \\ \amp \iff \boldx\in\NS B\text{.} \end{align*}
This proves \(\NS A=\NS B\text{.}\)
Next, by Theorem 3.8.6 we have \(\NS A=\NS T_A\text{,}\) \(\CS A=\im T_A\) and \(\NS B=\NS T_B\text{,}\) \(\CS B=\im T_B\text{.}\) It follows that
\begin{align*} \dim \CS A \amp=\dim\im T_A \\ \amp=n-\dim\NS A \amp (\knowl{./knowl/th_rank-nullity.html}{\text{Theorem 3.8.2}}) \\ \amp=n-\dim \NS B \amp (\NS A=\NS B)\\ \amp =\dim\im T_B \amp (\knowl{./knowl/th_rank-nullity.html}{\text{Theorem 3.8.2}})\\ \amp =\dim \CS B\text{.} \end{align*}
Lastly, we turn to the row spaces. We will show that each row of \(B\) is an element of \(\RS A\text{,}\) whence it follows that \(\RS B\subseteq \RS A\text{.}\) Let \(\boldr_i\) be the \(i\)-th row of \(B\text{,}\) and let \(\boldq_i\) be the \(i\)-th row of \(Q\text{.}\) By Theorem 2.1.26, we have \(\boldr_i=\boldq_i A\text{,}\) and furthermore, \(\boldq_i A\) is the linear combination of the rows of \(A\) whose coefficients come from the entries of \(\boldq_i\text{.}\) Thus \(\boldr_i\in\RS A\text{,}\) as desired.
Having shown that \(\RS B\subseteq \RS A\text{,}\) we see that the same argument works mutatis mutandis (swapping the roles of \(A\) and \(B\) and using \(Q^{-1}\) in place of \(Q\)) to show that \(\RS A\subseteq \RS B\text{.}\) We conclude that \(\RS A=\RS B\text{.}\)
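All three equalities, together with the caveat about column spaces, can be observed directly in SymPy (a sketch, assuming SymPy; rref produces a matrix row equivalent to \(A\)):

from sympy import Matrix

A = Matrix([[1, 1],
            [1, 1]])
B = A.rref()[0]                          # [[1, 1], [0, 0]], row equivalent to A
print(A.nullspace() == B.nullspace())    # True: NS A = NS B
print(A.rowspace(), B.rowspace())        # both spanned by (1, 1): RS A = RS B
print(A.rank() == B.rank())              # True: dim CS A = dim CS B
print(A.columnspace(), B.columnspace())  # Span{(1,1)} versus Span{(1,0)}: not equal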
Now that we better understand the role row reduction plays in fundamental spaces, we investigate the special case of a matrix in row echelon form.

Theorem 3.8.8. Fundamental spaces and row echelon forms.

Let \(A\) be an \(m\times n\) matrix, and assume \(A\) is row equivalent to a matrix \(U\) in row echelon form with \(r\) leading ones and \(s=n-r\) columns without leading ones. Then the \(r\) nonzero rows of \(U\) form a basis of \(\RS A=\RS U\text{,}\) the \(r\) columns of \(U\) containing leading ones form a basis of \(\CS U\text{,}\) and \(\dim\NS A=s\text{.}\) In particular,
\begin{equation} \dim\RS A=\dim\CS A=r\tag{3.8.7}\text{.} \end{equation}

Proof.

By Theorem 3.8.7 we know that \(\NS A=\NS U, \RS A=\RS U\text{,}\) and \(\dim\CS A=\dim\CS U\text{.}\) So it is enough to show that \(\dim \CS U=\dim\RS U=r\) and \(\dim \NS U=s\text{.}\)
First, we will show that the \(r\) nonzero rows of \(U\) form a basis for \(\RS U\text{,}\) proving \(\dim\RS U=r\text{.}\) Clearly the nonzero rows span \(\RS U\text{,}\) since any linear combination of all the rows of \(U\) can be expressed as a linear combination of the nonzero rows. Furthermore, since \(U\) is in row echelon form, the staircase pattern of the leading ones appearing in the nonzero rows assures that these row vectors are linearly independent.
Next, we show that the columns of \(U\) containing leading ones form a basis of \(\CS U\text{.}\) Let \(\boldu_{i_1},\dots, \boldu_{i_r}\) be the columns of \(U\) with leading ones, and let \(\boldu_{j_1}, \boldu_{j_2}, \dots, \boldu_{j_s}\) be the columns without leading ones. To prove the \(\boldu_{i_k}\) form a basis for \(\CS U\text{,}\) we will show that given any \(\boldy\in \CS U\) there is a unique choice of scalars \(c_1, c_2,\dots, c_r\) such that \(c_1\boldu_{i_1}+\cdots +c_r\boldu_{i_r}=\boldy\text{.}\) (Recall that the uniqueness of this choice implies linear independence.) Given \(\boldy\in \CS U\text{,}\) we can find \(\boldx\in\R^n\) such that \(U\boldx=\boldy\) (Theorem 3.8.6), which means the linear system with augmented matrix \([\ U\ \vert \ \boldy]\) is consistent. Using our Gaussian elimination theory (specifically, Procedure 1.3.5), we know that the solutions \(\boldx=(x_1,x_2,\dots, x_n)\) to this system are in 1-1 correspondence with choices for the free variables \(x_{j_1}=t_{j_1}, x_{j_2}=t_{j_2}, \dots, x_{j_s}=t_{j_s}\text{.}\) (Remember that the columns \(\boldu_{j_k}\) without leading ones correspond to the free variables.) In particular, there is a unique solution to \(U\boldx=\boldy\) where we set all the free variables equal to 0. By the column method (Theorem 2.1.24), this gives us a unique linear combination of only the columns \(\boldu_{i_k}\) with leading ones equal to \(\boldy\text{.}\) This proves the claim, and shows that the columns with leading ones form a basis for \(\CS U\text{.}\) We conclude that \(\dim\CS U=r\text{.}\)
Lastly, we have
\begin{align*} \dim \NS U \amp =\dim \NS T_U \\ \amp =n-\dim\im T_U \amp (\knowl{./knowl/th_rank-nullity.html}{\text{Theorem 3.8.2}}) \\ \amp =n-\dim\CS U \amp (\knowl{./knowl/th_fundspaces_matrixtransform.html}{\text{Theorem 3.8.6}})\\ \amp =n-r \\ \amp =s \text{,} \end{align*}
where the last equality uses the fact that the sum of the number of columns with leading ones (\(r\)) and the number of columns without leading ones (\(s\)) is \(n\text{,}\) the total number of columns.
Theorem 3.8.9 is now an easy consequence of the foregoing.

Theorem 3.8.9. Rank-nullity theorem for matrices.

Let \(A\) be an \(m\times n\) matrix. Then
\begin{equation*} n=\nullity A+\rank A\text{.} \end{equation*}

Proof.

We have
\begin{align*} n \amp =\dim\NS T_A+\dim\im T_A \amp (\knowl{./knowl/th_rank-nullity.html}{\text{Theorem 3.8.2}})\\ \amp =\dim\NS A+\dim\CS A \amp \knowl{./knowl/eq_col_image.html}{\text{(3.8.2)}} \\ \amp =\dim\NS A+\dim\RS A \amp \knowl{./knowl/eq_rank_col_row.html}{\text{(3.8.7)}} \\ \amp =\nullity A+\rank A \amp (\knowl{./knowl/d_fundamental_space.html}{\text{Definition 3.8.5}}) \text{.} \end{align*}
We now gather this suite of results into one overall procedure for computing with fundamental spaces.

Procedure 3.8.10. Computing fundamental spaces.

To compute bases of the fundamental spaces of an \(m\times n\) matrix \(A\text{,}\) proceed as follows.
  1. Row reduce \(A\) to a matrix \(U\) in row echelon form.
  2. The nonzero rows of \(U\) form a basis of \(\RS A\text{.}\)
  3. The columns of \(A\) corresponding to the columns of \(U\) containing leading ones form a basis of \(\CS A\text{.}\)
  4. Solve \(U\boldx=\boldzero\) using Procedure 1.3.5; the vectors obtained from the parametric solution by setting one free variable equal to 1 and the others equal to 0 form a basis of \(\NS A\text{.}\)
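The following SymPy sketch carries out Procedure 3.8.10, assuming SymPy is available (the function name fundamental_spaces is ours; rref returns the reduced row echelon form of a matrix together with its pivot column indices):

from sympy import Matrix

def fundamental_spaces(A):
    U, pivots = A.rref()
    RS = [U.row(i) for i in range(len(pivots))]   # nonzero rows of U
    CS = [A.col(j) for j in pivots]               # columns of A at leading ones
    NS = A.nullspace()                            # one vector per free column
    return RS, CS, NS

A = Matrix([[1, 1, 3],
            [1, 2, 1],
            [1, 3, -1]])
RS, CS, NS = fundamental_spaces(A)
print(len(RS), len(CS), len(NS))   # 2 2 1: rank A = 2, nullity A = 1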

Video example: fundamental spaces.

Figure 3.8.11. Video: computing fundamental spaces
The results 3.8.6–3.8.9 allow us to add seven more equivalent statements to our invertibility theorem, bringing us to a total of fourteen!

Subsection 3.8.3 Contracting and expanding to bases

Thanks to Theorem 3.7.3 we know that spanning sets can be contracted to bases, and linearly independent sets can be extended to bases; and we have already seen a few instances where this result has been put to good use. However, neither the theorem nor its proof provide a practical means of performing this contraction or extension. We would like a systematic way of determining which vectors to throw out (when contracting), or which vectors to chuck in (when extending). In the special case where \(V=\R^n\) for some \(n\text{,}\) we can adapt Procedure 3.8.10 to our needs.
Procedure 3.8.13. Contracting and extending to bases of \(\R^n\).

Let \(S=\{\boldv_1, \boldv_2, \dots, \boldv_r\}\) be a finite subset of \(\R^n\text{.}\)
  • Contracting to a basis. Assume \(\Span S=\R^n\text{.}\) Let \(A\) be the \(n\times r\) matrix whose columns are the elements of \(S\text{,}\) and let \(B\) be the basis of \(\CS A\) computed using the column space procedure of Procedure 3.8.10. Then \(B\subseteq S\) is a basis of \(\R^n\text{.}\)
  • Extending to a basis. Assume \(S\) is linearly independent. Let \(A\) be the \(n\times (r+n)\) matrix whose columns are the elements of \(S\) followed by the standard basis vectors \(\bolde_1, \bolde_2, \dots, \bolde_n\text{,}\) and let \(B\) be the basis of \(\CS A\) computed using the column space procedure of Procedure 3.8.10. Then \(B\supseteq S\) is a basis of \(\R^n\text{.}\)

Let’s see why in both cases the procedure produces a basis of \(\R^n\) that is either a sub- or superset of \(S\text{.}\)
Contracting to a basis.
In this case we have \(\CS A=\Span S=\R^n\text{.}\) Thus \(B\) is a basis for \(\R^n\text{.}\) Since the column space procedure selects columns from among the original columns of \(A\text{,}\) we have \(B\subseteq S\text{,}\) as desired.
Extending to a basis.
Since \(\CS A\) contains \(\bolde_j\) for all \(1\leq j\leq n\text{,}\) we have \(\CS A=\R^n\text{.}\) Thus \(B\) is a basis for \(\R^n\text{.}\) Since the first \(r\) columns of \(A\) are linearly independent (they are the elements of \(S\)), when we row reduce \(A\) to a matrix \(U\) in row echelon form, the first \(r\) columns of \(U\) will contain leading ones. (To see this, imagine row reducing the \(n\times r\) submatrix \(A'\) consisting of the first \(r\) columns of \(A\) to a row echelon matrix \(U'\text{.}\) Since these columns are linearly independent, they already form a basis for \(\CS A'\text{.}\) Thus the corresponding columns of \(U'\) must all have leading ones.) It follows that the first \(r\) columns of \(A\) are selected to be in the basis \(B\text{,}\) and hence that \(S\subseteq B\text{,}\) as desired.
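Here is a SymPy sketch of the extension case (assuming SymPy; the helper extend_to_basis is ours, for illustration): we adjoin the standard basis vectors as columns and keep the pivot columns, and, as argued above, the first \(r\) columns are always selected.

from sympy import Matrix, eye

def extend_to_basis(S, n):
    # Columns of A: the vectors of S, followed by e_1, ..., e_n
    A = Matrix.hstack(*S, eye(n))
    pivots = A.rref()[1]
    return [A.col(j) for j in pivots]    # a basis of R^n containing S

S = [Matrix([1, 1, 0]), Matrix([0, 1, 1])]
print(extend_to_basis(S, 3))             # S together with e_1: a basis of R^3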

Video example: contracting to a basis.

Figure 3.8.14. Video: contracting to a basis

Exercises 3.8.4 Exercises

Webwork Exercises

1.
Suppose that \(A\) is a \(4 \times 8\) matrix that has an echelon form with two zero rows. Find the dimension of the row space of \(A\text{,}\) the dimension of the column space of \(A\text{,}\) and the dimension of the null space of \(A\text{.}\)
The dimension of the row space of \(A\) is .
The dimension of the column space of \(A\) is .
The dimension of the null space of \(A\) is .
Answer 1.
\(2\)
Answer 2.
\(2\)
Answer 3.
\(6\)
Solution.
The dimension of the row space is the number of nonzero rows in the echelon form, or \(4 - 2 = 2.\) The dimension of the column space is the same as the dimension of the row space, and the dimension of the null space is \(8 - 2 = 6.\)
2.
Are the following statements true or false?
  1. \(R^n\) has exactly one subspace of dimension \(m\) for each of \(m = 0,1,2,\ldots, n\text{.}\)
  2. Let \(m>n\text{.}\) Then \(U =\) {\(u_1, u_2,\ldots, u_m\)} in \(R^n\) can form a basis for \(R^n\) if the correct \(m-n\) vectors are removed from \(U\text{.}\)
  3. Let \(m\lt n\text{.}\) Then \(U =\) {\(u_1, u_2,\ldots, u_m\)} in \(R^n\) can form a basis for \(R^n\) if the correct \(n-m\) vectors are added to \(U\text{.}\)
  4. The set {0} forms a basis for the zero subspace.
  5. If {\(u_1, u_2, u_3\)} is a basis for \(R^3\text{,}\) then span{\(u_1, u_2\)} is a plane.
3.
Are the following statements true or false?
  1. If \(\ S_1\) and \(\ S_2\) are subspaces of \(R^n\) of the same dimension, then \(S_1=S_2\text{.}\)
  2. If \(\ S =\) span{\(u_1, u_2, u_3\)}, then \(dim(S) = 3\) .
  3. If the set of vectors \(U\) spans a subspace \(S\text{,}\) then vectors can be added to \(U\) to create a basis for \(S\text{.}\)
  4. If the set of vectors \(U\) is linearly independent in a subspace \(S\text{,}\) then vectors can be added to \(U\) to create a basis for \(S\text{.}\)
  5. If the set of vectors \(U\) is linearly independent in a subspace \(S\text{,}\) then vectors can be removed from \(U\) to create a basis for \(S\text{.}\)
4.
Let
\begin{equation*} A = \left[\begin{array}{ccc} -1 \amp -1 \amp 0\cr 2 \amp 2 \amp 0\cr -3 \amp -3 \amp 0\cr -5 \amp -5 \amp 0 \end{array}\right]. \end{equation*}
Find dimensions of the kernel and image of \(T(\vec{x}) = A \vec{x}\text{.}\)
\({\rm dim } ({\rm Ker } (A)) =\) ,
\({\rm dim } ({\rm Im } (A)) =\) .
Answer 1.
\(2\)
Answer 2.
\(1\)
5.
Let \(f : \mathbb{R}^{3} \to \mathbb{R}^{2}\) be the linear transformation defined by
\begin{equation*} f(x,y,z) = \left[\begin{array}{c} 7\cr 9 \end{array}\right] x + \left[\begin{array}{c} 10\cr 13 \end{array}\right] y + \left[\begin{array}{c} 17\cr 22 \end{array}\right] z. \end{equation*}
Find bases for the kernel and image of \(f\text{.}\)
A basis for the kernel of \(f\) is \(\Big\lbrace\) \(\Big\rbrace.\)
A basis for the image of \(f\) is \(\Big\lbrace\) \(\Big\rbrace.\)
Answer 1.
\(\left<-1,-1,1\right>\)
Answer 2.
\(\left<7,9\right>, \left<10,13\right>\)
6.
Let \(L:\mathbb{R}^3 \to \mathbb{R}^3\) be the linear operator defined by
\begin{equation*} L(x) = \left[\begin{array}{c} 25 x_1 + 4 x_2 - 25 x_3\\ 2 x_2\\ 75 x_1 - 75 x_3 \end{array}\right] \end{equation*}
(a) Find the dimension of the range of \(L\text{:}\)
(b) Find the dimension of the kernel of \(L\text{:}\)
(c) Let \(S\) be the subspace of \(\mathbb{R}^3\) spanned by \(-16 e_1\) and \(6 e_2 + 6 e_3\text{.}\) Find the dimension of \(L(S)\text{:}\)
Answer 1.
\(2\)
Answer 2.
\(1\)
Answer 3.
\(2\)
Solution.
(a) Since \(e_1\text{,}\) \(e_2\text{,}\) \(e_3\) span \(\mathbb{R}^3\text{,}\) we get that \(L(\mathbb{R}^3)\) is spanned by \(L(e_1)\text{,}\) \(L(e_2)\text{,}\) \(L(e_3)\text{.}\) So
\begin{equation*} L(\mathbb{R}^3) = \mbox{span}\left( L(e_1),L(e_2),L(e_3)\right) = \mbox{span}\left( \left[\begin{array}{c} 25 \\ 0 \\ 75 \end{array}\right], \left[\begin{array}{c} 4 \\ 2 \\ 0 \end{array}\right], \left[\begin{array}{c} -25 \\ 0 \\ -75 \end{array}\right]\right) \end{equation*}
\begin{equation*} = \mbox{span}\left( \left[\begin{array}{c} 1 \\ 0 \\ 3 \end{array}\right], \left[\begin{array}{c} 2 \\ 1 \\ 0 \end{array}\right], \left[\begin{array}{c} -1 \\ 0 \\ -3 \end{array}\right]\right) = \mbox{span}\left( \left[\begin{array}{c} 1 \\ 0 \\ 3 \end{array}\right], \left[\begin{array}{c} 2 \\ 1 \\ 0 \end{array}\right]\right) \end{equation*}
and hence the dimension of the range is 2.
(b) The rank-nullity theorem implies that the dimension of the kernel is \(3-2 = 1\text{.}\)
(c) Notice that
\begin{equation*} L(S) = \mbox{span}\left(L(-16 e_1), L(6 e_2 + 6 e_3)\right) = \mbox{span}\left(L(e_1), L(e_2)+L(e_3)\right) \end{equation*}
\begin{equation*} = \mbox{span}\left( \left[\begin{array}{c} 25 \\ 0 \\ 75 \end{array}\right], \left[\begin{array}{c} -21 \\ 2 \\ -75 \end{array}\right]\right) \end{equation*}
and it is easy to check that these two vectors are linearly independent. Therefore, the dimension of \(L(S)\) is 2.
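A quick computational check of this solution, assuming SymPy is available:

from sympy import Matrix

A = Matrix([[25, 4, -25],
            [0, 2, 0],
            [75, 0, -75]])          # standard matrix of L
print(A.rank())                     # 2: dimension of the range
print(len(A.nullspace()))           # 1: dimension of the kernel
LS = Matrix.hstack(A * Matrix([-16, 0, 0]), A * Matrix([0, 6, 6]))
print(LS.rank())                    # 2: dimension of L(S)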

7.

In this exercise we will show that for any \((a,b,c,d)\in \R^4\text{,}\) there is a polynomial \(p(x)\in P_3\) satisfying
\begin{equation*} p(-1)=a, p(0)=b, p(1)=c, p(2)=d\text{.} \end{equation*}
In other words given any list of values \(a, b, c, d\text{,}\) we can find a polynomial that evaluates to these values at the inputs \(x=-1, 0, 1, 2\text{.}\)
  1. Define \(T\colon P_3\rightarrow \R^4\) by \(T(p(x))=(p(-1), p(0), p(1), p(2))\text{.}\) Show that \(T\) is linear.
  2. Compute \(\NS T\text{.}\) You may use the fact that a polynomial of degree \(n\) has at most \(n\) roots.
  3. Use the rank-nullity theorem to compute \(\dim \im T\text{.}\) Explain why this implies \(\im T=\R^4\text{.}\)
  4. Explain why the equality \(\im T=\R^4\) is equivalent to the claim we wish to prove.

8.

Use the rank-nullity theorem to compute the rank of the linear transformation \(T\) described.
  1. \(T\colon\R^7\rightarrow M_{32}\text{,}\) \(\nullity T=2\)
  2. \(T\colon P_3\rightarrow \R^5\text{,}\) \(\NS T=\{\boldzero\}\)
  3. \(T\colon P_5 \rightarrow P_5\text{,}\) \(\NS T=P_4\)

9.

For each linear transformation \(T\colon V\rightarrow W\) use the rank-nullity theorem to decide whether \(\im T=W\text{.}\)
  1. \(\displaystyle T\colon \R^4\rightarrow \R^5\)
  2. \(T\colon M_{22}\rightarrow P_2\text{,}\) \(\nullity T=2\)
  3. \(T\colon M_{23}\rightarrow M_{32}\text{,}\) \(\NS T=\{\boldzero\}\)

10.

Let \(A\) be \(m\times n\) with \(n\lt m\text{.}\) Prove that there is a \(\boldb\in\R^m\) such that the system \(A\boldx=\boldb\) is inconsistent.

11.

For each matrix \(A\) (i) row reduce \(A\) to a matrix \(U\) in row echelon form, (ii) compute bases for \(\CS A\) and \(\CS U\text{,}\) (iii) compute \(\dim \CS A\) and \(\dim \CS U\text{,}\) and (iv) decide whether \(\CS A=\CS U\text{.}\)
  1. \begin{equation*} A=\boldzero_{n\times n} \end{equation*}
  2. \begin{equation*} A=\begin{amatrix}[rr]-2 \amp 1\\ 1\amp 5 \end{amatrix} \end{equation*}
  3. \begin{equation*} A=\begin{amatrix}[rrr]1 \amp 1 \amp 3 \\ 1 \amp 2 \amp 1 \\ 1 \amp 3 \amp -1 \end{amatrix} \end{equation*}

12.

Assume \(A\) is invertible, and is row equivalent to the row echelon matrix \(U\text{.}\) Prove: \(\CS A=\CS U\text{.}\)

13.

For each matrix below, (i) compute bases for each fundamental space, (ii) identify these spaces as familiar geometric objects in \(\R^2\) or \(\R^3\text{,}\) and (iii) provide sketches of each space. The sketches of \(\NS A\) and \(\RS A\) should be combined in the same coordinate system.
  1. \begin{equation*} A=\begin{amatrix}[rrr]1\amp 0\amp 0\\ 0\amp 1\amp 0 \end{amatrix} \end{equation*}
  2. \begin{equation*} A=\begin{amatrix}[rrr]1\amp 1\amp -2\\ 3\amp -4\amp 1 \end{amatrix} \end{equation*}
  3. \begin{equation*} A=\begin{amatrix}[rr] 1\amp 2\\ 0\amp 0 \\ -1\amp -2 \end{amatrix} \end{equation*}

14.

For each \(A\) compute bases for each fundamental space. In each case, you can find bases for one of the fundamental spaces by inspection, and then use the rank-nullity theorem to reduce your workload for the other spaces. See first solution for a model example.
  1. \begin{equation*} A=\begin{amatrix}[rrrr]1\amp 1\amp 1\amp 1\\ 1\amp 1\amp 1\amp 1\\ \end{amatrix} \end{equation*}
  2. \begin{equation*} A=\begin{bmatrix}1\amp 1\amp 1\amp 1\\ 2\amp 1\amp 2\amp 1\\ 1\amp 1\amp 1\amp 1 \end{bmatrix} \end{equation*}
Solution.
  1. Clearly, \(B=\{(1,1)\}\) is a basis for \(\CS A\text{,}\) and \(B'=\{(1,1,1,1)\}\) is a basis for \(\RS A\text{.}\) It follows that \(\rank A=1\) and hence \(\nullity A=4-1=3\text{.}\) Thus we need to find three linearly independent elements of \(\NS A\) to find a basis. We can do so by inspection with the help of the column method. Namely, observe that \(\boldv_1=(1,-1,0,0), \boldv_2=(0,1,-1,0), \boldv_3=(0,0,1,-1)\) are all in \(\NS A\) (column method). The location of zeros in these vectors makes it clear that \(B''=\{\boldv_1,\boldv_2, \boldv_3\}\) is linearly independent. Since \(\dim \NS A=3\) and \(\val{B''}=3\text{,}\) we conclude that \(B''\) is a basis of \(\NS A\) (3.7.10). (See the computational check below.)
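The claimed dimensions are easy to confirm with SymPy (a sketch; note that nullspace may return a different, but equivalent, basis of \(\NS A\)):

from sympy import Matrix

A = Matrix([[1, 1, 1, 1],
            [1, 1, 1, 1]])
print(A.rank())          # 1, so rank A = 1 and nullity A = 3
print(A.nullspace())     # three vectors: (-1,1,0,0), (-1,0,1,0), (-1,0,0,1)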

15.

For each \(A\) use Procedure 3.8.10 to compute bases for each fundamental space.
  1. \begin{equation*} A= \begin{bmatrix}1\amp 2\amp 4\amp 5\\ 0\amp 1\amp -3\amp 0\\ 0\amp 0\amp 1\amp -3\\ 0\amp 0\amp 0\amp 0 \end{bmatrix} \end{equation*}
  2. \begin{equation*} A= \begin{bmatrix}1\amp 2\amp -1\amp 5\\ 0\amp 1\amp 4\amp 3\\ 0\amp 0\amp 1\amp -7\\ 0\amp 0\amp 0\amp 1 \end{bmatrix} \end{equation*}
  3. \begin{equation*} A = \begin{bmatrix}1\amp 4\amp 5\amp 6\amp 9\\ 3\amp -2\amp 1\amp 4\amp -1\\ -1\amp 0\amp -1\amp -2\amp -1\\ 2\amp 3\amp 5\amp 7\amp 8 \end{bmatrix} \end{equation*}

16.

Find the rank and nullity of each matrix by reducing it to row echelon form.
  1. \begin{equation*} A = \begin{amatrix}[rrrrr] 1\amp 0\amp -2\amp 1\amp 0\\ 0\amp -1\amp -3\amp 1\amp 3\\ -2\amp -1\amp 1\amp -1\amp 3\\ 0\amp 1\amp 3\amp 0\amp -4 \end{amatrix} \end{equation*}
  2. \begin{equation*} A = \begin{amatrix}[rrrr] 1\amp 3\amp 1\amp 3\\ 0\amp 1\amp 1\amp 0\\ -3\amp 0\amp 6\amp -1\\ 3\amp 4\amp -2\amp 1\\ 2\amp 0\amp -4\amp -2 \end{amatrix} \end{equation*}

17.

Let \(A\) be an \(n\times n\) matrix.
  1. Prove: \(A^2=\boldzero_{n\times n}\) if and only if \(\CS A\subseteq \NS A\text{.}\)
  2. Construct a \(2\times 2\) matrix \(A\) with \(\NS A=\CS A=\Span\{(1,2)\}\text{.}\) Verify that your \(A\) satisfies \(A^2=\boldzero_{2\times 2}\text{.}\)

18.

Suppose \(A\) is \(m\times n\) with \(m\ne n\text{.}\)
Prove: either the rows of \(A\) are linearly dependent or the columns of \(A\) are linearly dependent.

19.

Prove: \(\nullity A=\nullity A^T\) if and only if \(A\) is a square matrix.

20.

Suppose \(A\) and \(B\) are row equivalent \(m\times n\) matrices. For each \(1\leq j\leq n\) let \(\bolda_j\) and \(\boldb_j\) be the \(j\)-th columns of \(A\) and \(B\text{,}\) respectively.
Prove: columns \(\boldb_{i_1}, \boldb_{i_2}, \dots, \boldb_{i_r}\) of \(B\) are linearly independent if and only if the corresponding columns \(\bolda_{i_1}, \bolda_{i_2},\dots, \bolda_{i_r}\) are linearly independent.
Hint.
By Corollary 2.4.15 there is an invertible \(Q\) such that \(QA=B\text{.}\) Let \(A'\) and \(B'\) be the submatrices of \(A\) and \(B\) obtained by taking columns \(i_1, i_2, \dots, i_r\text{.}\) Show that we still have \(QA'=B'\) and relate linear independence of the columns to solutions of the matrix equations \(A'\boldx=\boldzero\) and \(B'\boldx=\boldzero\text{.}\)

21.

Prove Theorem 3.8.12 as follows.
  1. First show that all three statements of (13) are equivalent, and that all three statements of (14) are equivalent. (Use Theorem 3.7.11.)
  2. Show that statements (8)-(14) are equivalent with the help of Theorem 3.8.9.
  3. Choose a statement from (1)-(7) that can be easily shown to be equivalent to one of the statements from (8)-(14).