
Section 3.9 Isomorphisms

In this section we utilize bases, dimension, and the rank-nullity theorem to investigate properties of linear transformations. The main focus will be the notion of an isomorphism, which is simply a linear transformation that is invertible when considered as a function. We begin, however, with an enlightening discussion relating linear transformations and bases.

Subsection 3.9.1 Isomorphisms

We spoke of the coordinate vector map as a means of “translating” questions about an abstract vector space \(V\) to equivalent ones about the more familiar vector space \(\R^n\text{.}\) Properties (1)-(3) of Theorem 5.1.12 are what guarantee that nothing is lost in this translation. Axiomatizing these properties, we obtain an important family of linear transformations called isomorphisms. The word “isomorphism” is derived from the Greek terms iso, meaning “same”, and morphe, meaning “form”. As we will see, isomorphic vector spaces \(V\) and \(W\) are essentially the same creature, at least as far as linear algebraic properties are concerned. Furthermore, an isomorphism \(T\colon V\rightarrow W\) provides a one-to-one correspondence between them: a dictionary that allows us to translate statements about \(V\) to statements about \(W\text{,}\) and vice versa.
Before getting to the definition, recall that a function \(f\colon X\rightarrow Y\) is bijective if it is both injective and surjective (0.2.7).

Definition 3.9.1. Isomorphism.

Let \(V\) and \(W\) be vector spaces. An isomorphism from \(V\) to \(W\) is a bijective linear transformation \(T\colon V\rightarrow W\text{.}\) Vector spaces \(V\) and \(W\) are isomorphic if there is an isomorphism from \(V\) to \(W\text{.}\)
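For instance (an illustrative example supplied here, where \(P_1\) denotes the vector space of polynomials of degree at most one), the map \(T\colon \R^2\rightarrow P_1\) defined by \(T(a,b)=a+bx\) is an isomorphism. It is linear since
\begin{align*} T\big((a,b)+(c,d)\big) \amp = (a+c)+(b+d)x=T(a,b)+T(c,d)\\ T\big(c(a,b)\big) \amp = ca+(cb)x=cT(a,b)\text{,} \end{align*}
and it is bijective since the map \(a+bx\mapsto (a,b)\) is a two-sided inverse.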

Remark 3.9.2. Proving \(T\) is an isomorphism.

According to Definition 3.9.1, to prove a function \(T\colon V\rightarrow W\) is an isomorphism, we must show that
  (i) \(T\) is linear, and
  (ii) \(T\) is invertible.
Since being invertible is equivalent to being bijective, there are two main approaches to proving that (ii) holds for a linear transformation \(T\colon V\rightarrow W\text{:}\)
  (a) we can show directly that \(T\) is invertible by providing an inverse \(T^{-1}\colon W\rightarrow V\text{;}\)
  (b) we can show that \(T\) is bijective (i.e., injective and surjective).
Which approach, (a) or (b), is more convenient depends on the linear transformation \(T\) in question.
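To illustrate the first approach (with an example supplied here, again writing \(P_1\) for the polynomials of degree at most one), consider the evaluation map \(T\colon P_1\rightarrow \R^2\) defined by \(T(p)=(p(0), p(1))\text{.}\) An explicit inverse is given by linear interpolation:
\begin{equation*} T^{-1}(c_0,c_1)=c_0+(c_1-c_0)x\text{.} \end{equation*}
Indeed, the polynomial \(p(x)=c_0+(c_1-c_0)x\) satisfies \(p(0)=c_0\) and \(p(1)=c_1\text{,}\) so \(T\circ T^{-1}\) is the identity on \(\R^2\text{;}\) a similar check shows \(T^{-1}\circ T\) is the identity on \(P_1\text{.}\)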

Remark 3.9.3. Inverse of isomorphism is an isomorphism.

Let \(T\colon V\rightarrow W\) be an isomorphism. Since \(T\) is invertible, there is an inverse function \(T^{-1}\colon W\rightarrow V\text{.}\) Not surprisingly, \(T^{-1}\) is itself a linear transformation, though of course this requires proof. (See [provisional cross-reference: ex_isomorphism_inverse].) Since \(T^{-1}\) is also invertible (\(T\) is its inverse), it follows that \(T^{-1}\) is itself an isomorphism.
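The additive step of that proof can be sketched as follows (the full argument is left to the exercise): by linearity of \(T\text{,}\)
\begin{align*} T\big(T^{-1}(\boldw_1)+T^{-1}(\boldw_2)\big) \amp = T\big(T^{-1}(\boldw_1)\big)+T\big(T^{-1}(\boldw_2)\big)\\ \amp = \boldw_1+\boldw_2 = T\big(T^{-1}(\boldw_1+\boldw_2)\big)\text{,} \end{align*}
and injectivity of \(T\) then gives \(T^{-1}(\boldw_1+\boldw_2)=T^{-1}(\boldw_1)+T^{-1}(\boldw_2)\text{.}\) The argument for scalar multiplication is analogous.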
We mentioned in the introduction that two isomorphic vector spaces are, for all linear algebraic intents and purposes, essentially the same thing. The next theorem provides some evidence for this claim. It also illustrates how a given isomorphism \(T\colon V\rightarrow W\) can translate back and forth between two isomorphic vector spaces. Recall (Definition 0.2.6) that for a subset \(S\subseteq V\text{,}\) the image \(T(S)\) of \(S\) under \(T\) is the set
\begin{equation*} T(S)=\{\boldw\in W\colon \boldw=T(\boldv) \text{ for some } \boldv\in S\}=\{T(\boldv)\colon \boldv\in S\}\text{.} \end{equation*}
The following omnibus result is useful for deciding whether a linear transformation is an isomorphism, and lists a few of the properties of a vector space that are preserved by isomorphisms: namely, dimension, span, and linear independence.
  1. Assume \(T\) is injective. Then
    \begin{align*} T(\boldv)=\boldzero_W\amp \implies T(\boldv)=T(\boldzero_V) \\ \amp\implies \boldv=\boldzero_V \text{.} \end{align*}
    It follows that \(\NS T=\{\boldzero_V\}\text{.}\)
    Now assume \(\NS T=\{\boldzero_V\}\text{.}\) Then
    \begin{align*} T(\boldv)=T(\boldv') \amp\implies T(\boldv)-T(\boldv')=\boldzero_W \\ \amp\implies T(\boldv-\boldv')=\boldzero_W \\ \amp \implies \boldv-\boldv'\in \NS T=\{\boldzero_V\}\\ \amp\implies \boldv-\boldv'=\boldzero_V \\ \amp \implies \boldv=\boldv'\text{.} \end{align*}
    Thus \(T\) is injective.