
Section 5.1 Coordinate vectors

Suppose \(V\) is an \(n\)-dimensional vector space. Once we choose a basis \(B=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) of \(V\text{,}\) we know from Theorem 3.6.7 that any \(\boldv\in V\) can be expressed in a unique way as
\begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n, \quad c_i\in \R\text{.} \end{equation*}
Coordinate vectors turn this observation into a computational tool by exploiting the resulting one-to-one correspondence
\begin{equation} \boldv\in V \longleftrightarrow (c_1,c_2,\dots, c_n)\in \R^n\text{.}\tag{5.1.1} \end{equation}
We will use the correspondence (5.1.1) in two distinct ways, as described below.
  1. Given an \(n\)-dimensional vector space \(V\) and basis \(B\text{,}\) the correspondence (5.1.1) allows us to treat elements of the abstract space \(V\) as if they were elements of \(\R^n\text{,}\) and then to make use of our wealth of computational procedures related to \(n\)-tuples.
  2. The correspondence (5.1.1) is also useful when working in \(\R^n\) itself. Namely, there will be situations where it is convenient to represent vectors with a particular nonstandard basis \(B\text{,}\) as opposed to the standard basis \(\{\bolde_1, \bolde_2, \dots, \bolde_n\}\text{.}\) In this setting the correspondence (5.1.1) will be used as a “change of coordinates” technique.

Subsection 5.1.1 Coordinate vectors

Before we can define coordinate vectors we need to define an ordered basis. As the name suggests, this is nothing more than a basis together with a particular choice of ordering of its elements: i.e., first element, second element, etc. In other words, an ordered basis is a sequence of vectors, as opposed to a set of vectors.

Definition 5.1.1. Ordered bases.

Let \(V\) be a finite-dimensional vector space. An ordered basis of \(V\) is a sequence of distinct vectors \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) whose underlying set \(\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) is a basis of \(V\text{.}\)

Remark 5.1.2.

If \(V\) is further given the structure of a (finite-dimensional) inner product space, then we say an ordered basis \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) is orthogonal (resp., orthonormal) if its underlying set is orthogonal (resp., orthonormal).

Remark 5.1.3.

A single (unordered) basis \(B=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) of an \(n\)-dimensional vector space gives rise to \(n!\) different ordered bases: there are \(n\) choices for the first element of the ordered basis, \(n-1\) choices for the second element, etc.
For example the standard basis \(B=\{\bolde_1, \bolde_2, \bolde_3\}\) of \(\R^3\) gives rise to \(3!=6\) different ordered bases of \(\R^3\text{:}\)
\begin{align*} B_1\amp =(\bolde_1, \bolde_2, \bolde_3) \amp B_2\amp =(\bolde_1, \bolde_3, \bolde_2) \\ B_3\amp=(\bolde_2, \bolde_1, \bolde_3) \amp B_4\amp =(\bolde_2, \bolde_3, \bolde_1) \\ B_5 \amp =(\bolde_3, \bolde_1, \bolde_2) \amp B_6\amp =(\bolde_3, \bolde_2, \bolde_1)\text{.} \end{align*}
By a slight abuse of language we will use “standard basis” to describe both one of our standard unordered bases and the corresponding ordered basis obtained by choosing the implicit ordering of the set descriptions in Remark 3.6.2. For example, \(\{x^2, x, 1\}\) and \((x^2, x, 1)\) will both be called the standard basis of \(P_2\text{.}\)

Definition 5.1.4. Coordinate vectors.

Let \(B=(\boldv_1, \boldv_2,\dots , \boldv_n)\) be an ordered basis for the vector space \(V\text{.}\) According to Theorem 3.6.7, for any \(\boldv\in V\) there is a unique choice of scalars \(c_i\in \R\) satisfying
\begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\text{.} \end{equation*}
We call the corresponding \(n\)-tuple \((c_1,c_2,\dots, c_n)\) the coordinate vector of \(\boldv\) relative to the basis \(B\), and denote it \([\boldv]_B\text{:}\) i.e.,
\begin{equation*} [\boldv]_B=(c_1,c_2,\dots, c_n)\text{.} \end{equation*}
To compute the coordinate vector of an element \(\boldv\in V\) relative to a given ordered basis \(B=(\boldv_1,\boldv_2,\dots, \boldv_n)\) we must find the scalars \(c_1, c_2, \dots, c_n\) that satisfy the vector equation
\begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\text{.} \end{equation*}
Usually this is accomplished by reducing the vector equation to an equivalent system of linear equations in the unknowns \(c_i\) and solving with Gaussian elimination. However, in some cases we can produce the \(c_i\) by inspection: for example, when computing with standard bases, as the first example below illustrates.
Furthermore, if the given basis happens to be orthogonal (or orthonormal) with respect to some inner product, Theorem 5.1.9 provides an inner product formula for computing the \(c_i\text{.}\)
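Although we will usually carry out these computations by hand, they are easy to automate. Below is a minimal Python sketch, assuming NumPy is available; the helper name coord_vector is ours, not a standard routine. It solves the vector equation above for a basis of \(\R^n\) by placing the basis vectors in the columns of a matrix.

import numpy as np

def coord_vector(basis, v):
    """Return [v]_B for an ordered basis of R^n, given as a list of n-tuples."""
    # The columns of A are the basis vectors, so solving A @ c = v is exactly
    # solving v = c_1 v_1 + ... + c_n v_n for the coordinates (c_1, ..., c_n).
    A = np.column_stack([np.asarray(b, dtype=float) for b in basis])
    return np.linalg.solve(A, np.asarray(v, dtype=float))

# The nonstandard basis of Example 5.1.7 below: B = ((1,2),(1,1)), v = (3,3).
print(coord_vector([(1, 2), (1, 1)], (3, 3)))  # [0. 3.]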

Example 5.1.5. Standard bases.

Computing coordinate vectors relative to one of our standard bases for \(\R^n\text{,}\) \(M_{mn}\text{,}\) or \(P_{n}\) amounts to just listing the coefficients or entries used to specify the given vector. The examples below serve to illustrate the general method in this setting.
  1. Let \(V=\R^3\) and \(B=(\bolde_1, \bolde_2, \bolde_3)\text{.}\) For any \(\boldv=(a,b,c)\in \R^3\) we have \([\boldv]_{B}=(a,b,c)\text{,}\) since \((a,b,c)=a\bolde_1+b\bolde_2+c\bolde_3\text{.}\)
  2. Let \(V=M_{22}\) and \(B=(E_{11}, E_{12}, E_{21}, E_{22})\text{.}\) For any \(A=\begin{amatrix}[rr]a\amp b\\ c\amp d \end{amatrix}\) we have \([A]_B=(a,b,c,d)\) since
    \begin{equation*} A=aE_{11}+bE_{12}+cE_{21}+dE_{22}\text{.} \end{equation*}

Example 5.1.6. Reorderings of standard bases.

If we choose an alternate ordering of one of the standard bases, the entries of the coordinate vector are reordered accordingly, as illustrated by the examples below.
  1. Let \(V=\R^3\) and \(B=(\bolde_2, \bolde_1, \bolde_3)\text{.}\) Given \(\boldv=(a,b,c)\in \R^3\) we have \([\boldv]_B=(b,a,c)\text{,}\) since
    \begin{equation*} \boldv=b\bolde_2+a\bolde_1+c\bolde_3\text{.} \end{equation*}
  2. Let \(V=P_3\) and \(B=(1,x,x^2, x^3)\text{.}\) Given \(p(x)=ax^3+bx^2+cx+d\) we have \([p(x)]_B=(d, c, b, a)\text{,}\) since
    \begin{equation*} p(x)=d\cdot 1+cx+bx^2+ax^3\text{.} \end{equation*}

Example 5.1.7. Nonstandard bases.

For a nonstandard ordered basis, we compute coordinate vectors by solving a relevant system of linear equations, as the examples below illustrate.
  1. Let \(V=\R^2\text{,}\) \(B=((1,2),(1,1))\text{,}\) and \(\boldv=(3,3)\text{.}\) Compute \([\boldv]_B\text{.}\) More generally, compute \([(a,b)]_B\) for an arbitrary \((a,b)\in \R^2\text{.}\)
  2. Let \(V=P_2\text{,}\) \(B=(x^2+x+1, x^2-x, x^2-1)\text{,}\) and \(p(x)=x^2\text{.}\) Compute \([p(x)]_B\text{.}\) More generally, compute \([ax^2+bx+c]\) for an arbitrary element \(ax^2+bx+c\in P_2\text{.}\)
Solution.
  1. To compute \([(3,3)]_B\) we must find the unique pair \((c_1, c_2)\) satisfying
    \begin{equation*} (3,3)=c_1(1,2)+c_2(1,1)\text{.} \end{equation*}
    By inspection, we see that
    \begin{equation*} (3,3)=3(1,1)=0(1,2)+3(1,1)\text{.} \end{equation*}
    We conclude that
    \begin{equation*} [\boldv]_{B}=(0,3)\text{.} \end{equation*}
    More generally, to compute \([\boldv]_B\) for an arbitrary \(\boldv=(a,b)\in \R^2\text{,}\) we must find the pair \((c_1,c_2)\) satisfying \((a,b)=c_1(1,2)+c_2(1,1)\text{,}\) or equivalently
    \begin{equation*} \begin{linsys}{2} c_1\amp +\amp c_2 \amp =\amp a\\ 2c_1\amp +\amp c_2\amp =\amp b \end{linsys}\text{.} \end{equation*}
    The usual techniques yield the unique solution \((c_1,c_2)=(-a+b,2a-b)\text{,}\) and thus
    \begin{equation*} [\boldv]_B=(-a+b, 2a-b) \end{equation*}
    for \(\boldv=(a,b)\text{.}\)
  2. To compute \([x^2]_B\) we must find the unique triple \((c_1,c_2,c_3)\) satisfying
    \begin{equation*} x^2=c_1(x^2+x+1)+c_2(x^2-x)+c_3(x^2-1)\text{.} \end{equation*}
    The equivalent linear system once we combine like terms and equate coefficients is
    \begin{equation*} \begin{linsys}{3} c_1\amp +\amp c_2\amp +\amp c_3\amp =\amp 1\\ c_1\amp -\amp c_2\amp \amp \amp =\amp 0\\ c_1\amp \amp \amp -\amp c_3\amp =\amp 0\\ \end{linsys}\text{.} \end{equation*}
    The unique solution to this system is \((c_1,c_2,c_3)=(1/3, 1/3, 1/3)\text{.}\) We conclude
    \begin{equation*} [x^2]_B=\frac{1}{3}(1, 1, 1)\text{.} \end{equation*}
    The same reasoning shows that, more generally, given a polynomial \(p(x)=ax^2+bx+c\text{,}\) we have
    \begin{equation*} [p(x)]_B=\frac{1}{3}(a+b+c, a-2b+c, a+b-2c)\text{.} \end{equation*}
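For readers who wish to verify this arithmetic by machine, here is a minimal NumPy check of part 2; the matrix A simply encodes the linear system displayed above.

import numpy as np

# Columns hold the coefficients of (x^2, x, 1) in x^2+x+1, x^2-x, x^2-1.
A = np.array([[1.0,  1.0,  1.0],
              [1.0, -1.0,  0.0],
              [1.0,  0.0, -1.0]])

# [x^2]_B: take p(x) = x^2, i.e. (a, b, c) = (1, 0, 0).
print(np.linalg.solve(A, [1.0, 0.0, 0.0]))  # [0.3333 0.3333 0.3333]

# Spot-check the general formula with p(x) = 2x^2 + 3x + 4.
a, b, c = 2.0, 3.0, 4.0
expected = np.array([a + b + c, a - 2*b + c, a + b - 2*c]) / 3
print(np.allclose(np.linalg.solve(A, [a, b, c]), expected))  # True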

Figure 5.1.8. Video example: coordinate vectors.
When \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) is an orthogonal basis with respect to some inner product \(\langle\, , \rangle\text{,}\) Theorem 4.2.7 tells us that
\begin{equation*} \boldv=\frac{\langle \boldv,\boldv_1 \rangle }{\langle \boldv_1, \boldv_1\rangle }\boldv_1+\frac{\langle \boldv,\boldv_2 \rangle }{\langle \boldv_2, \boldv_2\rangle }\boldv_2+\cdots +\frac{\langle \boldv,\boldv_n \rangle }{\langle \boldv_n, \boldv_n\rangle }\boldv_n\text{,} \end{equation*}
and thus
\begin{equation*} [\boldv]_B=\left(\frac{\langle \boldv,\boldv_1 \rangle }{\langle \boldv_1, \boldv_1\rangle }, \frac{\langle \boldv,\boldv_2 \rangle }{\langle \boldv_2, \boldv_2\rangle },\dots, \frac{\langle \boldv,\boldv_n \rangle }{\langle \boldv_n, \boldv_n\rangle }\right)\text{.} \end{equation*}
We have thus proved the following theorem.

Theorem 5.1.9. Coordinate vectors relative to an orthogonal basis.

Let \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) be an orthogonal ordered basis of the inner product space \((V, \langle\, , \rangle)\text{.}\) For all \(\boldv\in V\) we have
\begin{equation*} [\boldv]_B=\left(\frac{\langle \boldv,\boldv_1 \rangle }{\langle \boldv_1, \boldv_1\rangle }, \frac{\langle \boldv,\boldv_2 \rangle }{\langle \boldv_2, \boldv_2\rangle },\dots, \frac{\langle \boldv,\boldv_n \rangle }{\langle \boldv_n, \boldv_n\rangle }\right)\text{.} \end{equation*}
In particular, if \(B\) is orthonormal, then \([\boldv]_B=(\langle \boldv, \boldv_1\rangle, \langle \boldv, \boldv_2\rangle, \dots, \langle \boldv, \boldv_n\rangle)\text{.}\)

Example 5.1.10. Orthogonal bases.

Let \(V=\R^2\) and \(B=((1,1),(-1,2))\text{.}\) Find a general formula for \([(a,b)]_B\text{.}\) Note: \(B\) is orthogonal with respect to the weighted dot product
\begin{equation*} \langle (x_1,x_2), (y_1,y_2)\rangle =2x_1y_1+x_2y_2\text{.} \end{equation*}
Solution.
Applying Theorem 5.1.9 to \(B\) and the dot product with weights \(2, 1\text{,}\) for any \(\boldv=(a,b)\) we compute
\begin{align*} [(a,b)]_B \amp =\left(\frac{\langle (a,b), (1,1)\rangle }{\langle (1,1),(1,1) \rangle }, \frac{\langle (a,b), (-1,2)\rangle }{\langle (-1,2),(-1,2) \rangle }\right)\\ \amp=\left(\frac{1}{3}(2a+b),\frac{1}{3}(-a+b) \right) \text{.} \end{align*}
Let’s check our formula with \(\boldv=(3,-3)\text{.}\) The formula yields \([(3,-3)]_B=(1,-2)\text{,}\) and indeed we see that
\begin{equation*} (3,-3)=1(1,1)-2(-1,2)\text{.} \end{equation*}
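A quick machine check of this computation, with the weighted inner product coded by hand (a sketch assuming NumPy, not part of any library):

import numpy as np

w = np.array([2.0, 1.0])  # weights: <x, y> = 2*x_1*y_1 + x_2*y_2

def ip(x, y):
    return float(np.sum(w * np.asarray(x) * np.asarray(y)))

B = [(1.0, 1.0), (-1.0, 2.0)]
v = (3.0, -3.0)

# Theorem 5.1.9: the i-th coordinate is <v, v_i> / <v_i, v_i>.
print([ip(v, b) / ip(b, b) for b in B])  # [1.0, -2.0]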

Subsection 5.1.2 Coordinate vector transformation

The next theorem is the key to understanding the tremendous computational value of coordinate vectors. Here we treat the coordinate vector operation as a function
\begin{align*} [\phantom{\boldv}]_B\colon V\amp \rightarrow \R^n\\ \boldv\amp\mapsto [\boldv]_B\in \R^n \text{.} \end{align*}
Not surprisingly, this turns out to be a linear transformation, which we call a coordinate vector transformation. Furthermore, the correspondence
\begin{equation*} \boldv\in V \longmapsto [\boldv]_B\in \R^n \end{equation*}
is a one-to-one correspondence between \(V\) and \(\R^n\text{,}\) allowing us to identify the vectors \(\boldv\in V\) with \(n\)-tuples in \(\R^n\text{.}\) In the language of Section 3.9, these two facts taken together mean that the coordinate vector transformation is an isomorphism between \(V\) and \(\R^n\text{.}\) Practically speaking, this means any question regarding the vector space structure of \(V\) can be translated to an equivalent question about the vector space \(\R^n\text{.}\) As a result, given any “exotic” vector space \(V\) of finite dimension, once we choose an ordered basis \(B\) of \(V\text{,}\) questions about \(V\) can be answered by taking coordinate vectors with respect to \(B\) and answering the corresponding question in the more familiar setting of \(\R^n\text{,}\) where we have a wealth of computational procedures at our disposal. We memorialize this principle as a mantra.

Mantra 5.1.11. Coordinate vector mantra.

Once an ordered basis \(B\) is chosen, vectors of the \(n\)-dimensional vector space \(V\) can be treated as if they were \(n\)-tuples of \(\R^n\text{:}\) simply work with their coordinate vectors relative to \(B\text{.}\)

Theorem 5.1.12. Coordinate vector transformation.

Let \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) be an ordered basis of the vector space \(V\text{,}\) and let \(T=[\phantom{\boldv}]_B\colon V\rightarrow \R^n\) be the coordinate vector transformation.
  1. \(T\) is a linear transformation.
  2. \(T\) is injective: i.e., \(\boldv=\boldw\) if and only if \([\boldv]_B=[\boldw]_B\text{.}\)
  3. \(T\) is surjective: i.e., \(\im T=\R^n\text{.}\)
  4. Given a subset \(S=\{\boldv_1, \boldv_2, \dots, \boldv_r\}\subseteq V\) with image \(S'=\{[\boldv_1]_B, [\boldv_2]_B, \dots, [\boldv_r]_B\}\subseteq \R^n\text{,}\) we have \(\boldv\in \Span S\) if and only if \([\boldv]_B\in \Span S'\text{;}\) furthermore, \(S\) is linearly independent if and only if \(S'\) is linearly independent.

Proof.
  1. Suppose \(T(\boldv)=[\boldv]_B=(a_1,a_2,\dots, a_n), T(\boldw)=[\boldw]_B=(b_1, b_2, \dots, b_n)\text{.}\) By definition this means
    \begin{align*} \boldv \amp =\sum_{i=1}^na_i\boldv_i, \amp \boldw\amp =\sum_{i=1}^nb_i\boldv_i\text{.} \end{align*}
    It follows that
    \begin{equation*} c\boldv+d\boldw=\sum_{i=1}^n(ca_i+db_i)\boldv_i\text{,} \end{equation*}
    and hence
    \begin{align*} T(c\boldv+d\boldw)\amp =[c\boldv+d\boldw]_B \amp (\text{def. of } [\phantom{\boldv}]_B) \\ \amp =(ca_1+db_1,ca_2+db_2,\dots, ca_n+db_n) \\ \amp =c(a_1,a_2,\dots, a_n)+d(b_1,b_2,\dots, b_n)\\ \amp =c[\boldv]_B+d[\boldw]_B\\ \amp =cT(\boldv)+dT(\boldw) \text{.} \end{align*}
    This proves \(T\) is linear.
  2. Clearly, if \(\boldv=\boldw\text{,}\) then \([\boldv]_B=[\boldw]_B\text{.}\) If \(T(\boldv)=T(\boldw)=(c_1,c_2,\dots, c_n)\text{,}\) then by definition of \([\phantom{\boldv}]_B\) we must have
    \begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldw\text{.} \end{equation*}
  3. Given any \(\boldb=(b_1,b_2,\dots, b_n)\in \R^n\text{,}\) we have \(\boldb=T(\boldv)\text{,}\) where
    \begin{equation*} \boldv=b_1\boldv_1+b_2\boldv_2+\cdots +b_n\boldv_n\text{.} \end{equation*}
    This proves \(\im T=\R^n\text{.}\)
  4. We have
    \begin{align*} \boldv\in \Span S \amp \iff \boldv=\sum_{i=1}^rc_i\boldv_i\\ \amp\iff [\boldv]_{B}=\left[ \sum_{i=1}^rc_i\boldv_i\right]_B \\ \amp \iff [\boldv]_B=\sum_{i=1}^rc_i[\boldv_i]_B\amp ([\phantom{\boldv}]_B \text{ is linear})\\ \amp \iff [\boldv]_B\in \Span S'\text{.} \end{align*}
    Similarly, we have
    \begin{align*} \sum_{i=1}^rc_i\boldv_i=\boldzero \amp \iff \left[\sum_{i=1}^rc_i\boldv_i\right]_B=[\boldzero]_B \amp (\boldv=\boldw\iff [\boldv]_B=[\boldw]_B) \\ \amp\iff \sum_{i=1}^rc_i[\boldv_i]_B=(0,0,\dots, 0) \amp ([\phantom{\boldv}]_B \text{ is linear}) \text{.} \end{align*}
    From this equivalence we see that there is a nontrivial linear combination of \(S\) yielding \(\boldzero\in V\) if and only if there is a nontrivial linear combination of \(S'\) yielding \(\boldzero\in \R^n\text{.}\) In other words, \(S\) is linearly independent if and only if \(S'\) is linearly independent.
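As a quick illustration of statement (4), the following sketch (Python with SymPy for exact arithmetic; the particular polynomials are chosen purely for illustration) tests linear independence of a set of polynomials in \(P_2\) by testing their coordinate vectors in \(\R^3\text{.}\)

import sympy as sp

# S = {x^2 + 1, x - 1, x^2 + x} in P_2. Relative to the standard basis
# (x^2, x, 1), the coordinate vectors form the columns of M.
M = sp.Matrix([[1,  0, 1],
               [0,  1, 1],
               [1, -1, 0]])

# By Theorem 5.1.12(4), S is independent iff the columns of M are, i.e. rank 3.
print(M.rank() == 3)  # False: indeed x^2 + x = (x^2 + 1) + (x - 1)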

Remark 5.1.13.

Statements (2) and (3) of Theorem 5.1.12 tell us that the coordinate transformation is injective (or one-to-one) and surjective (or onto), respectively. (See Definition 0.2.7).
As an illustration of the coordinate vector mantra, we describe a general method of contracting and extending subsets of a finite-dimensional vector space \(V\) to bases. The method translates the problem into \(\R^n\) using the coordinate transformation, applies the relevant algorithm for subsets of \(\R^n\text{,}\) and then “lifts” the results back to \(V\) using the coordinate transformation again.

Procedure 5.1.14. Contracting and extending to bases.

Let \(V\) be a finite-dimensional vector space with ordered basis \(B\text{,}\) and let \(S=\{\boldw_1, \boldw_2, \dots, \boldw_r\}\subseteq V\text{.}\) To contract \(S\) to a basis of \(W=\Span S\text{,}\) or to extend \(S\) to a basis of \(V\text{:}\)
  1. Compute the set of coordinate vectors \(S'=\{[\boldw_1]_B, [\boldw_2]_B, \dots, [\boldw_r]_B\}\subseteq \R^n\text{.}\)
  2. Contract \(S'\) to a basis of \(\Span S'\) (resp., extend \(S'\) to a basis of \(\R^n\)) using one of our procedures for subsets of \(\R^n\text{.}\)
  3. Translate back to \(V\text{:}\) by Theorem 5.1.12, the elements of \(V\) whose coordinate vectors make up the resulting basis form a basis of \(W\) (resp., of \(V\)).

Example 5.1.15.

The set
\begin{equation*} S=\left \{ A_1=\begin{bmatrix}2\amp 1\\ 0\amp -2 \end{bmatrix} , A_2=\begin{bmatrix}1\amp 1\\ 1\amp -1 \end{bmatrix} , A_3=\begin{bmatrix}0\amp 1\\ 2\amp 0 \end{bmatrix} , A_4=\begin{bmatrix}-1\amp 0\\ 1\amp 1 \end{bmatrix} \right\} \end{equation*}
is a subset of the space \(W=\{ A\in M_{22}\colon \tr A=0\}\text{.}\) Let \(W'=\Span S\text{.}\) Contract \(S\) to a basis of \(W'\) and determine whether \(W'=W\text{.}\)
Hint.
Choose an ordered basis \(B\) of \(M_{22}\) and use the coordinate vector map to translate to a question about subspaces of \(\R^4\text{.}\) Answer this question and translate back to \(M_{22}\text{.}\)
Solution.
Let \(B=(E_{11}, E_{12}, E_{21}, E_{22})\) be the standard basis of \(M_{22}\text{.}\) Apply \([\phantom{\boldv}]_B\) to the elements of the given \(S\) to get a corresponding set \(S'\subseteq\R^4\text{:}\)
\begin{equation*} S'=\left\{ [A_1]_B=(2,1,0,-2), [A_2]_B=(1,1,1,-1), [A_3]_B=(0,1,2,0), [A_4]_B=(-1,0,1,1) \right\}\text{.} \end{equation*}
Apply the column space procedure (Procedure 3.8.13) to contract \(S'\) to a basis \(T'\) of \(\Span S'\text{.}\) This produces the subset
\begin{equation*} T'=\{[A_1]_B=(2,1,0,-2), [A_2]_B=(1,1,1,-1)\}\text{.} \end{equation*}
Translating back to \(V=M_{22}\text{,}\) we conclude that the corresponding set
\begin{equation*} T=\{A_1, A_2\} \end{equation*}
is a basis for \(W'=\Span S\text{.}\) We conclude that \(\dim W'=2 \text{.}\)
Lastly, the space \(W\) of all trace-zero matrices is easily seen to have basis
\begin{equation*} \left\{ \begin{amatrix}[rr]1\amp 0\\ 0 \amp -1 \end{amatrix}, \begin{amatrix}[rr]0\amp 1\\ 0\amp 0 \end{amatrix}, \begin{amatrix}[rr]0 \amp 0\\ 1\amp 0 \end{amatrix} \right\}\text{,} \end{equation*}
and hence \(\dim W=3\text{.}\) Since \(\dim W'\lt\dim W\text{,}\) we conclude that \(W'\ne W\text{.}\)
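The row reduction in this example is easily automated as well. A minimal SymPy sketch, where rref reports the pivot columns used by the column space procedure:

import sympy as sp

# Columns are the coordinate vectors [A_1]_B, ..., [A_4]_B computed above.
M = sp.Matrix([[ 2,  1, 0, -1],
               [ 1,  1, 1,  0],
               [ 0,  1, 2,  1],
               [-2, -1, 0,  1]])

_, pivots = M.rref()
print(pivots)  # (0, 1): the first two columns are pivots, so {A_1, A_2}
               # is a basis of W' = Span S.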

Exercises 5.1.3

Coordinate vectors in \(\R^n\).

In each exercise an ordered basis \(B\) is given for \(\R^3\text{.}\) Compute \([\boldx]_B\) for the given \(\boldx\in \R^3\text{.}\)
1.
\(B=\left((1,0,0), (2,2,0), (3,3,3) \right)\text{,}\) \(\boldx=(2,-1,3)\)
2.
\(B=\left((5,-12,3), (1,2,3), (-4,5,6) \right)\text{,}\) \(\boldx=(2,-1,3)\)

Coordinate vectors in \(P_n\).

In each exercise an ordered basis \(B\) is given for \(P_2\text{.}\) Compute \([p]_B\) for the given polynomial \(p\in P_2\text{.}\)
3.
\(B=(1,x,x^2)\text{,}\) \(p(x)=-2x^2+3x-5\)
4.
\(B=(x^2+1, x+1, x^2+x)\text{,}\) \(p(x)=x^2-x+2\)

Coordinate vectors: orthogonal basis.

In each exercise an inner product space \((V, \langle\, , \rangle)\) and an orthogonal ordered basis \(B\) are given. Use Theorem 5.1.9 to compute the requested coordinate vector.
5.
\(V=\R^3\) with dot product; \(B=\left((1,1,1),(1,-1,0),(1,1,-2)\right)\text{.}\) Compute \([(-3,2,4)]_B\text{.}\)
6.
\(V=\R^3\) with dot product with weights \(k_1=1, k_2=2, k_3=2\text{;}\) \(B=\left((1,1,1),(2,1,-2),(4,-3,1)\right)\text{.}\) Compute \([(0,1,0)]_B\text{.}\)
7.
\(V=\Span\{\cos x, \cos 2x, \cos 3x\}\subseteq C([0,2\pi])\) with integral inner product \(\langle f, g\rangle=\int_0^{2\pi}f(x)g(x)\, dx\text{;}\) \(B=(\cos x, \cos 2x, \cos 3x)\text{.}\) Compute \([\cos^3 x]_B\text{.}\) (Yes, \(\cos^3(x)\) can indeed be written as a linear combination of \(\cos x, \cos 2x, \cos 3x\text{.}\) In this exercise you will discover what the corresponding identity is using inner products!)

8.

Let \(B=(A_1,A_2,A_3,A_4)\) where
\begin{equation*} A_1=\begin{bmatrix} 1\amp 0\\ 1\amp 0 \end{bmatrix}, A_2=\begin{bmatrix} 1\amp 1\\ 0\amp 0 \end{bmatrix}, A_3=\begin{bmatrix} 1\amp 0\\ 0\amp 1 \end{bmatrix}, A_4=\begin{bmatrix} 0\amp 0\\ 1\amp 0 \end{bmatrix}\text{.} \end{equation*}
You may take for granted that \(B\) is an ordered basis of \(M_{22}\text{.}\)
  1. Compute \(\left [ \begin{bmatrix} 6\amp 2\\ 5\amp 3 \end{bmatrix}\right]_B\text{.}\)
  2. Compute \([A]_B\) for an arbitrary matrix \(\begin{bmatrix} a\amp b\\ c\amp d \end{bmatrix}\in M_{22}\text{.}\)

9.

Let \(S=\{p_1(x)=2x^2+x+1, p_2(x)=x-1, p_3(x)=x^2+1, p_4(x)=x^2-x-1\}\subseteq P_2 \text{.}\)
  1. Use one of the techniques described in Procedure 5.1.14 to contract \(S\) to a basis of \(W=\Span S\text{.}\) To begin, choose your favorite ordered basis of \(P_2\text{.}\)
  2. Use your result in (a) to describe \(W\) in as simple a manner as possible.

10.

Let \(S=\{ p_1=x^3+1, p_2=2x^3+x+1, p_3=3x^3+2x+1, p_4=2x^3+x^2+x+1\}\subseteq P_3\text{.}\)
  1. Use one of the techniques described in Procedure 5.1.14 to contract \(S\) to a basis of \(W=\Span S\text{.}\) To begin, choose your favorite ordered basis of \(P_3\text{.}\)
  2. Use your result in (a) to decide whether \(W=P_3\text{.}\)

11.

Let \(S=\{p_1(x)=x^2+x+1, p_2(x)=3x^2+6x\}\subseteq P_2\text{.}\) Use one of the techniques described in Procedure 5.1.14 to extend \(S\) to a basis of \(P_2\text{.}\)

12.

Let
\begin{equation*} S=\left\{A_1=\begin{amatrix}[rr]1\amp 2\\1\amp 1 \end{amatrix}, \ A_2=\begin{amatrix}[rr] 1\amp 1\\2\amp 1\end{amatrix}, A_3=\begin{amatrix}[rr] -1\amp 1\\ -4\amp -1 \end{amatrix} , A_4=\begin{amatrix}[rr] 0\amp 1\\ 2\amp 0\end{amatrix}\right\}\subseteq M_{22}\text{.} \end{equation*}
  1. Use one of the techniques described in Procedure 5.1.14 to contract \(S\) to a basis of \(W=\Span S\text{.}\)
  2. Show that
    \begin{equation*} W=\left\{\begin{bmatrix}a\amp b\\ c\amp a\end{bmatrix}\colon a,b,c\in \R\right\}\text{.} \end{equation*}
    Use a dimension argument to make your life easier.

13. Orthonormal coordinate vectors.

Let \((V, \langle\, , \rangle)\) be an inner product space, and suppose \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) is an orthonormal ordered basis of \(V\text{.}\)
  1. Prove that
    \begin{equation*} \langle \boldv, \boldw\rangle =[\boldv]_B\cdot [\boldw]_B \end{equation*}
    for all \(\boldv, \boldw\in V\text{.}\) In other words we can compute the inner product of vectors by computing the dot product of their coordinate vectors with respect to the orthonormal basis \(B\text{.}\)
  2. Prove that a set \(S=\{\boldw_1, \boldw_2, \dots, \boldw_r\}\subseteq V\) is orthogonal (resp., orthonormal) with respect to \(\langle\, , \rangle\) if and only if \(S'=\{[\boldw_1]_B, [\boldw_2]_B, \dots, [\boldw_r]_B\}\subseteq \R^n\) is orthogonal (resp., orthonormal) with respect to the dot product.