Linear Transformations, Rank-Nullity, and Matrix Representations (Part 1)

One of the most fundamental theorems in Linear Algebra is the Rank-Nullity, or Dimension, Theorem. This post and the next (written in an attempt to study for my preliminary exams) will discuss this theorem, along with linear transformations and their relationship to matrices. In what follows, we follow Hoffman and Kunze in letting Greek letters denote vectors and lowercase Roman letters denote scalars.

We begin with a few preliminary remarks.

Definition 1 Let {V} and {W} be vector spaces over the field {\mathcal{F}}. A linear transformation {T:V\rightarrow W} is a mapping such that

\displaystyle  T[c\alpha+\beta] = cT[\alpha]+T[\beta] \ \ \ \ \ (1)

for all {\alpha,\beta\in V} and {c\in\mathcal{F}}.

Linear transformations are wildly important in all of mathematics. In {\mathbb{R}}, for example, any function whose graph is a line through the origin is a linear transformation. Why only lines through the origin, rather than any line? Suppose {T:\mathbb{R}\rightarrow\mathbb{R}} is a linear transformation with {T[0]=\beta}. Then, for any {\alpha}, linearity gives {T[\alpha]=T[\alpha+0]=T[\alpha]+T[0]=T[\alpha]+\beta}, which is only possible if {\beta=0}. Indeed, for any linear transformation {T:V\rightarrow W}, {T[0_V]=0_W}, where {0_V} and {0_W} are the zero elements of {V} and {W}, respectively. A more important example of a linear transformation is differentiation, viewed as a map from the space of differentiable functions to the space of all functions. Finally, any {m\times n} matrix defines a linear transformation from {\mathbb{R}^n} to {\mathbb{R}^m} via matrix-vector multiplication. We'll see later that any linear transformation between finite-dimensional spaces can be written as such a matrix.
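To make the defining property concrete, here is a minimal numerical sketch in Python (the matrix and vectors are arbitrary choices for illustration, not anything from Hoffman and Kunze): an {m\times n} matrix acts on {\mathbb{R}^n} by multiplication, and we can check equation (1) directly.

    import numpy as np

    # An arbitrary 3 x 2 matrix defines a linear transformation T: R^2 -> R^3
    # via T[alpha] = A @ alpha.
    A = np.array([[1.0, 2.0],
                  [0.0, 1.0],
                  [3.0, -1.0]])

    def T(alpha):
        return A @ alpha

    alpha = np.array([1.0, -2.0])
    beta = np.array([0.5, 4.0])
    c = 3.0

    # Equation (1): T[c*alpha + beta] == c*T[alpha] + T[beta], up to rounding.
    assert np.allclose(T(c * alpha + beta), c * T(alpha) + T(beta))

    # And T must send the zero vector of V to the zero vector of W.
    assert np.allclose(T(np.zeros(2)), np.zeros(3))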

Two important objects associated to any linear transformation are its range and its null space (or kernel), along with their dimensions: the rank and the nullity.

Definition 2 Let {V} and {W} be vector spaces over a field {\mathcal{F}} with {V} finite-dimensional and let {T:V\rightarrow W} be a linear transformation. Then,

  1. The range of {T}, denoted range{(T)}, is the set {\{\beta\in W : T[\alpha]=\beta \text{ for some } \alpha \in V\}}. The rank of {T} is the dimension of range{(T)}.
  2. The null space of {T}, denoted null{(T)}, is the set {\{\alpha\in V: T[\alpha]=0\}}. The nullity of {T} is the dimension of null{(T)}.
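
For concreteness, here is a small sketch of how one might compute these numerically for a matrix transformation (the example matrix is an arbitrary choice; null_space is SciPy's routine for an orthonormal basis of the null space):

    import numpy as np
    from scipy.linalg import null_space

    # T: R^4 -> R^3 given by an arbitrary matrix whose third row is the
    # sum of the first two, so the rank is 2.
    A = np.array([[1.0, 0.0, 2.0, 1.0],
                  [0.0, 1.0, 1.0, -1.0],
                  [1.0, 1.0, 3.0, 0.0]])

    rank = np.linalg.matrix_rank(A)   # dim range(T) = 2
    N = null_space(A)                 # basis for null(T), shape (4, 2)
    nullity = N.shape[1]              # dim null(T) = 2

    # Every basis vector of the null space is sent to (numerically) zero.
    assert np.allclose(A @ N, 0.0)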

Notice that, by naming their dimensions, we've implicitly assumed that the range and null space of a linear transformation are themselves vector spaces. This is not too hard to justify, so we'll do it here.

Theorem 3 Let {V} and {W} be vector spaces over the field {\mathcal{F}} and let {T:V\rightarrow W} be a linear transformation. Then, range{(T)} is a subspace of {W} and null{(T)} is a subspace of {V}.

Proof: To show our result, we use the subspace test. So, we will show that range{(T)} and null{(T)} are nonempty sets which are closed under vector addition and scalar multiplication.

First, recall from above that {T[0_V]=0_W}. This shows that the range and the null space of {T} are both nonempty.

Now, we want to show that if {\beta_1,\beta_2\in} range{(T)}, then {c\beta_1+\beta_2\in} range{(T)} as well. So, assume {T[\alpha_1]=\beta_1} and {T[\alpha_2]=\beta_2}, so that {\beta_1,\beta_2\in} range{(T)}. Then, by linearity, {T[c\alpha_1+\alpha_2]=c\beta_1+\beta_2}. This means that there is an {\alpha\in V} (namely, {c\alpha_1+\alpha_2}) such that {T[\alpha]=c\beta_1+\beta_2} whenever {\beta_1} and {\beta_2} are in range{(T)}; that is, {c\beta_1+\beta_2} is in the range of {T}. This gives us that range{(T)} is a subspace of {W}.

We follow the same process for the null space. So, assume {\alpha_1,\alpha_2\in} null{(T)}. Then, {T[c\alpha_1+\alpha_2] = cT[\alpha_1]+T[\alpha_2] = c(0)+(0)=0.} So, {c\alpha_1+\alpha_2\in} null{(T)} whenever {\alpha_1} and {\alpha_2} are. That is, null{(T)} is a subspace of {V}. \Box
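
As a numerical illustration of the closure argument for the range (a sketch only; the matrix, vectors, and scalar are arbitrary choices), we can exhibit the preimage {c\alpha_1+\alpha_2} directly:

    import numpy as np

    # The same kind of arbitrary matrix transformation as before.
    A = np.array([[1.0, 2.0],
                  [0.0, 1.0],
                  [3.0, -1.0]])

    rng = np.random.default_rng(1)
    a1 = rng.standard_normal(2)
    a2 = rng.standard_normal(2)
    c = 2.5

    # beta_1 = T[a1], beta_2 = T[a2]; form c*beta_1 + beta_2.
    b = c * (A @ a1) + (A @ a2)

    # The proof's witness: alpha = c*a1 + a2 maps onto c*beta_1 + beta_2,
    # so the combination is again in range(T). Least squares recovers it
    # here because A has full column rank.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    assert np.allclose(A @ x, b)
    assert np.allclose(x, c * a1 + a2)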

With all of this in hand, we’re finally ready to state and prove the Rank-Nullity theorem for transformations.

Theorem 4 (Rank-Nullity) Let {V} and {W} be vector spaces over the field {\mathcal{F}}, {V} finite-dimensional, and let {T:V\rightarrow W} be a linear transformation. Then,

\displaystyle  \text{rank}(T) + \text{nullity}(T) = \text{dim}(V). \ \ \ \ \ (2)

Proof: Our proof will proceed as follows: first, we'll fix a basis for null{(T)}. We know this exists, since {V} is finite-dimensional and null{(T)} is a subspace of {V}. We'll then extend it to a basis for {V}, and from that basis construct a spanning set for range{(T)} which we'll show is linearly independent.
Assume that dim{(V)=n} and that {\{\alpha_1,\ldots,\alpha_k\}} is a basis for null{(T)} (certainly, {k\leq n}). We contend that every linearly independent set of vectors in a finite-dimensional vector space can be extended to a basis for that space. Why? If {\{\alpha_1,\ldots,\alpha_k\}} does not already span {V}, choose some {\alpha_{k+1}\in V} outside its span; the enlarged set {\{\alpha_1,\ldots,\alpha_{k+1}\}} is still linearly independent, since any nontrivial dependence relation would either involve {\alpha_{k+1}} (putting it in the span of the others, a contradiction) or be a dependence among the {\alpha_1,\ldots,\alpha_k} alone (contradicting their independence). Repeating this, the process must terminate after at most {n-k} steps, since no linearly independent set in {V} can contain more than {n} vectors, and it terminates only when the set spans {V}. So, since {\{\alpha_1,\ldots,\alpha_k\}} is a linearly independent subset of {V}, we can extend it to a basis {\{\alpha_1,\ldots,\alpha_k,\alpha_{k+1},\ldots,\alpha_n\}} for {V}.
Now, consider the set {\{T[\alpha_1],\ldots,T[\alpha_n]\}}. This spans range{(T)}: any {\beta\in} range{(T)} equals {T[\alpha]} for some {\alpha\in V}, and writing {\alpha} in terms of the basis and applying linearity expresses {\beta} as a linear combination of the {T[\alpha_i]}. Moreover, since {\alpha_1,\ldots,\alpha_k} are in the null space of {T}, {T[\alpha_i]=0} for {i=1,\ldots,k}. We have, then, that {\{T[\alpha_{k+1}],\ldots,T[\alpha_n]\}} is a spanning set for range{(T)}, and we want to show that it's linearly independent. So, consider the linear combination

\displaystyle  \sum_{i=k+1}^n c_i T[\alpha_i] = 0 \ \ \ \ \ (3)

for scalars {c_i}. We want to show that {c_i=0} for all {i}. Since {T} is linear, (3) is equivalent to

\displaystyle  T[\sum_{i=k+1}^n c_i\alpha_i] = 0. \ \ \ \ \ (4)

Thus, {\alpha=\sum_{i=k+1}^n c_i\alpha_i} is an element of the null space of {T}. Since {\{\alpha_1,\ldots,\alpha_k\}} is a basis for the null space, there are scalars {b_i} such that

\displaystyle  \sum_{i=1}^k b_i \alpha_i = \alpha = \sum_{i=k+1}^n c_i\alpha_i. \ \ \ \ \ (5)

Rearranging gives {\sum_{i=1}^k b_i\alpha_i - \sum_{i=k+1}^n c_i\alpha_i = 0}. The {\alpha_i} form a linearly independent set, however, so {b_i=c_i=0} for all {i}. Thus, we have that {\{T[\alpha_{k+1}],\ldots,T[\alpha_n]\}} is a linearly independent set and so a basis for range{(T)}. Therefore,

\displaystyle  \text{rank}(T) + \text{nullity}(T) = (n-k) + k = n = \text{dim}(V), \ \ \ \ \ (6)

as desired. \Box
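
The theorem is also easy to test numerically. Below is a brief sketch (the random matrices are arbitrary choices, not part of the proof): an {m\times n} matrix is viewed as a transformation from {\mathbb{R}^n} to {\mathbb{R}^m}, so {\text{dim}(V)=n}, and the rank and nullity should always sum to {n}.

    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(0)

    # An m x n matrix is T: R^n -> R^m, so dim(V) = n.
    for m, n in [(3, 5), (5, 3), (4, 4)]:
        A = rng.standard_normal((m, n))
        A[-1] = A[0]  # repeat a row so the rank need not be min(m, n)
        rank = np.linalg.matrix_rank(A)
        nullity = null_space(A).shape[1]
        assert rank + nullity == n  # rank(T) + nullity(T) = dim(V)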

In introductory Linear Algebra classes and texts, the Rank-Nullity Theorem is usually stated in terms of matrices and the solvability of systems of linear equations. In the next post, we discuss the representation of linear transformations by matrices, and we will see that this can always be done for finite-dimensional spaces. Certainly, there are times when it is easier to manipulate matrices than linear transformations, and vice versa. Indeed, in practice, to quote Hoffman and Kunze, “We may do this when it is convenient, and when it is not convenient we shall not.” The following post will discuss how to write transformations in terms of matrices, as well as how to state the Rank-Nullity Theorem in terms of matrices.