# Linear Transformations

A **linear transformation** is a function from one vector space to another that respects the underlying (linear) structure of each vector space. A linear transformation is also known as a linear operator or map. The codomain of the transformation may be the same as the domain, and when that happens, the transformation is known as an endomorphism or, if it is also invertible, an automorphism. The two vector spaces must have the same underlying field.

The defining characteristic of a linear transformation $T: V \to W$ is that, for any vectors $v_1$ and $v_2$ in $V$ and scalars $a$ and $b$ of the underlying field,

$T(av_1 + bv_2) = aT(v_1) + bT(v_2).$

Linear transformations are useful because they preserve the structure of a vector space. So, many qualitative properties of the domain of a linear transformation may, under certain conditions, carry over to its image. For instance, the linear structure immediately gives that the kernel is a subspace (not just a subset) of the domain, and that the image is a subspace of the codomain.

Many familiar functions can be seen as linear transformations in the proper setting. The transformations in the change of basis formulas are linear, and most geometric operations, including rotations, reflections, and contractions/dilations, are linear transformations. Even more powerfully, linear algebra techniques can apply to certain very non-linear functions, through either approximation by linear functions or reinterpretation as linear functions on unusual vector spaces. A comprehensive, grounded understanding of linear transformations reveals many connections between areas and objects of mathematics.

A common transformation in Euclidean geometry is rotation in a plane, about the origin. By considering Euclidean points as vectors in the vector space $\mathbb{R}^2$, rotations can be viewed in a linear algebraic sense. A rotation of $v$ counterclockwise by angle $\theta$ is given by

$\text{Rotate}(v) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} v.$

The linear transformation $\text{Rotate}$ goes from $\mathbb{R}^2$ to $\mathbb{R}^2$ and is given by the matrix shown above. Because this matrix is invertible for any value $\theta$, it follows that this linear transformation is in fact an automorphism. Since rotations can be "undone" by rotating in the opposite direction, this makes sense.
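As a quick numerical sketch (using NumPy; the function name `rotation_matrix` is just illustrative), the rotation matrix above can be built for any angle, and composing it with the rotation by $-\theta$ recovers the identity, confirming the automorphism claim:

```python
import numpy as np

def rotation_matrix(theta):
    """Matrix of the counterclockwise rotation by theta about the origin."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = np.pi / 2              # rotate by 90 degrees
R = rotation_matrix(theta)

v = np.array([1.0, 0.0])
w = R @ v                      # (1, 0) rotates to (0, 1)

# Rotating back by -theta undoes the rotation, so R is invertible
# (an automorphism of R^2) with inverse rotation_matrix(-theta).
v_back = rotation_matrix(-theta) @ w
```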


## Types of Linear Transformations

Linear transformations are most commonly written in terms of matrix multiplication. A transformation $T: V \to W$ from $m$-dimensional vector space $V$ to $n$-dimensional vector space $W$ is given by an $n \times m$ matrix $M$. Note, however, that this requires *choosing* a basis for $V$ and a basis for $W$, while the linear transformation exists independent of basis. (That is, it could be expressed as a matrix for any selection of bases.)

The linear transformation from $\mathbb{R}^3$ to $\mathbb{R}^2$ defined by $T(x,\,y,\,z) = (x - y,\, y - z)$ is given by the matrix

$M = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}.$

So, $T$ can also be defined for vectors $v = (v_1, \, v_2, \, v_3)$ by the matrix product

$T(v) = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.$

Note that the dimension of the initial vector space is the number of columns in the matrix, while the dimension of the target vector space is the number of rows in the matrix.
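As a quick check that the matrix product reproduces the component formula, here is a small NumPy sketch of the transformation above:

```python
import numpy as np

# Matrix of T(x, y, z) = (x - y, y - z):
# 2 rows (target R^2), 3 columns (domain R^3)
M = np.array([[1, -1,  0],
              [0,  1, -1]])

v = np.array([3, 5, 2])
Tv = M @ v                     # (3 - 5, 5 - 2) = (-2, 3)
```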

Linear transformations also exist in infinite-dimensional vector spaces, and some of them can also be written as matrices, using the slight abuse of notation known as infinite matrices. However, the concept of linear transformations exists independent of matrices; matrices simply provide a nice framework for finite computations.

A linear transformation is surjective if every vector in its codomain is the image of some vector in its domain. Equivalently, at least one $n \times n$ minor of the $n \times m$ matrix is invertible (the matrix has full row rank). It is injective if every vector in its image is the image of only one vector in its domain. Equivalently, at least one $m \times m$ minor of the $n \times m$ matrix is invertible (the matrix has full column rank).

Is the linear transformation $T(x,\,y,\,z) = (x - y,\, y - z)$, from $\mathbb{R}^3$ to $\mathbb{R}^2$, injective? Is it surjective?

For a vector $v = (v_1,\,v_2,\,v_3)$, this can be written as

$T(v) = M v = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.$

$M$ is a $2 \times 3$ matrix, so it is surjective because the minor $\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}$ has determinant $1$ and therefore is invertible (since the determinant is nonzero). However, there are no $3 \times 3$ minors, so it is not injective. $_\square$
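The same conclusion can be reached by computing the rank of $M$: full row rank means surjective, full column rank means injective. A minimal sketch with NumPy:

```python
import numpy as np

M = np.array([[1, -1,  0],
              [0,  1, -1]])
n, m = M.shape                 # n = 2 rows, m = 3 columns

rank = np.linalg.matrix_rank(M)
surjective = (rank == n)       # full row rank: the image is all of R^2
injective  = (rank == m)       # full column rank: only 0 maps to 0
```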

A linear transformation $T: V \to W$ between two vector spaces of equal dimension (finite or infinite) is **invertible** if there exists a linear transformation $T^{-1}$ such that $T\big(T^{-1}(v)\big) = v$ and $T^{-1}\big(T(v)\big) = v$ for any vector $v \in V$. For finite dimensional vector spaces, a linear transformation is invertible if and only if its matrix is invertible.

Note that a linear transformation must be between vector spaces of equal dimension in order to be invertible. To see why, consider the linear transformation $T(x,\,y,\,z) = (x - y,\, y - z)$ from $\mathbb{R}^3$ to $\mathbb{R}^2$. This linear transformation has a right inverse $S(x,\,y) = (x + y,\, y, \, 0).$ That is, $T\big(S(x,\,y)\big) = T(x + y,\,y,\,0) = (x,\,y)$ for all $(x,\,y) \in \mathbb{R}^2$. However, it has no left inverse, since there is no map $R: \mathbb{R}^2 \to \mathbb{R}^3$ such that $R\big(T(x,\,y,\,z)\big) = (x,\,y,\,z)$ for all $(x,\,y,\,z) \in \mathbb{R}^3$. This follows from facts about the rank of $T$: the rank of $T$ is at most $2$, so $T$ cannot be injective on $\mathbb{R}^3$, and a non-injective map has no left inverse.
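The right-inverse claim is easy to verify numerically. In matrix terms, $T \circ S$ should be the identity on $\mathbb{R}^2$, while $S \circ T$ cannot be the identity on $\mathbb{R}^3$ because its rank is bounded by the rank of $T$:

```python
import numpy as np

T = np.array([[1, -1,  0],
              [0,  1, -1]])    # T(x, y, z) = (x - y, y - z)

S = np.array([[1, 1],
              [0, 1],
              [0, 0]])         # S(x, y) = (x + y, y, 0)

# T o S is the 2x2 identity, so S is a right inverse of T ...
TS = T @ S
# ... but S o T cannot be the 3x3 identity: rank(S @ T) <= rank(T) = 2 < 3.
ST = S @ T
```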

Which of the following is/are invertible linear transformations?

- $T_1: \mathbb{R}^3 \to \mathbb{R}^3$ is the transformation that takes $(x, \, y, \, z)$ to $(x - y,\, y - z,\, z - x)$.
- $T_2: \mathbb{C}^2 \to \mathbb{C}^2$ is the transformation that takes $(w, \, z)$ to $\big(\text{Re}(w) + \text{Im}(z) i, \, \text{Re}(z) + \text{Im}(w) i\big)$.
- $V$ is the vector space of all sequences of real numbers (vector addition creates a new sequence from the component-wise sums of the previous two). $T_3: V \to V$ is the "right shift" transformation that takes a sequence $\{a_n\}_{n \ge 0}$ and returns a sequence $\{b_n\}_{n \ge 0}$ satisfying $b_0 = 0$ and $b_n = a_{n - 1}$ for all $n \ge 1$.
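As a starting point for the first transformation (without giving away the other two), one can write down the matrix of $T_1$ and apply the determinant criterion for invertibility; this sketch uses NumPy:

```python
import numpy as np

# Matrix of T_1(x, y, z) = (x - y, y - z, z - x)
M1 = np.array([[ 1, -1,  0],
               [ 0,  1, -1],
               [-1,  0,  1]])

# A square matrix is invertible exactly when its determinant is nonzero.
# Here the rows sum to the zero vector, so the determinant is 0.
det1 = np.linalg.det(M1)
```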

## Examples of Linear Transformations

A linear transformation can take many forms, depending on the vector space in question.

Consider the vector space $\mathbb{R}_{\le n}[x]$ of polynomials of degree at most $n$. By noting there are $n+1$ coefficients in any such polynomial, in some sense the isomorphism $\mathbb{R}_{\le n}[x] \cong \mathbb{R}^{n+1}$ holds. However, there is a natural linear transformation $\frac{d}{dx}$ on the vector space $\mathbb{R}_{\le n}[x]$ that satisfies

$\frac{d}{dx} \left( a_0 + a_1x + a_2x^2 + \dots + a_nx^n \right) = a_1 + 2a_2x + \dots + na_nx^{n-1}.$
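Under the isomorphism with $\mathbb{R}^{n+1}$, the derivative becomes an $(n+1) \times (n+1)$ matrix acting on coefficient vectors. A minimal sketch (the helper `derivative_matrix` is illustrative):

```python
import numpy as np

def derivative_matrix(n):
    """Matrix of d/dx on polynomials of degree at most n,
    in the basis 1, x, ..., x^n (coefficient vectors)."""
    D = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        D[k - 1, k] = k        # d/dx (x^k) = k x^(k-1)
    return D

# p(x) = 2 + 3x + 5x^2, stored as coefficients (a_0, a_1, a_2)
p = np.array([2.0, 3.0, 5.0])
dp = derivative_matrix(2) @ p  # coefficients of p'(x) = 3 + 10x
```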

## Effects on the Basis

A linear transformation from vector space $V$ to vector space $W$ is determined entirely by the image of basis vectors of $V$. This allows for more concise representations of linear transformations, and it provides a linear algebraic explanation for the relation between linear transformations and matrices (the matrix's columns and rows represent bases).

Let $V$ and $W$ be vector spaces over the same field, and let $\mathcal{B} \subset V$ be a set of basis vectors of $V$. Then, for any function $f: \mathcal{B} \to W$, there is a unique linear transformation $T: V \to W$ such that $T(u) = f(u)$ for each $u \in \mathcal{B}$. Furthermore, the span of $f(\mathcal{B})$ is equal to the image of $T$.

In other words, a linear transformation can be created from any function (no matter how "non-linear" in appearance) on the basis vectors. The behavior of basis vectors entirely determines the linear transformation.

The proof follows from the fact that any element of $V$ is expressible as a linear combination of basis elements and that there is only one possible such linear combination.

With this mentality, change of basis can be used to rewrite the matrix for a linear transformation in terms of any basis. This is particularly helpful for endomorphisms (linear transformations from a vector space to itself).

However, the linear transformation itself remains unchanged, independent of basis choice. That is, no matter what the choice of basis, all the qualities of a linear transformation remain unchanged: injectivity, surjectivity, invertibility, diagonalizability, etc.
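This invariance can be checked directly: if $P$ is an invertible change-of-basis matrix, then $P^{-1}MP$ represents the same transformation in the new basis, and quantities like the determinant, rank, and eigenvalues agree. A sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])     # matrix of some endomorphism in one basis

P = rng.standard_normal((2, 2))
while abs(np.linalg.det(P)) < 1e-6:   # ensure P is an invertible change of basis
    P = rng.standard_normal((2, 2))

M2 = np.linalg.inv(P) @ M @ P  # the same transformation in the new basis

# Basis-independent quantities agree:
same_det  = np.isclose(np.linalg.det(M), np.linalg.det(M2))
same_rank = np.linalg.matrix_rank(M) == np.linalg.matrix_rank(M2)
same_eigs = np.allclose(np.sort(np.linalg.eigvals(M).real),
                        np.sort(np.linalg.eigvals(M2).real))
```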

We can also establish a bijection between the linear transformations from an $n$-dimensional space $V$ to an $m$-dimensional space $W$ and the $m \times n$ matrices. Let $T$ be such a transformation, and fix the bases $\mathfrak{A} = \{ e_j \}_{j = 1, \dots, n}$ for $V$ and $\mathfrak{B} = \{ e'_i \}_{i = 1, \dots, m}$ for $W$. Then we can describe the effect of $T$ on each basis vector $e_j$ as

$T(e_j) = \sum_i a_{ij} e'_i, \quad j = 1, 2, \dots, n.$

Define the matrix $A$ with entries $A(i, j) = a_{ij}$, $1 \le i \le m$, $1 \le j \le n$, to be the matrix of $T$ in the bases $\mathfrak{A}, \mathfrak{B}$. The $j^\text{th}$ column of $A$ describes the effect of $T$ on the $j^\text{th}$ basis vector of $V$, and from the previous ideas, we can now describe in coordinates the effect of $T$ on any vector in $V$ via matrix multiplication. Conversely, by the definition of matrix multiplication, any $m \times n$ matrix acting on coordinate vectors with respect to $\mathfrak{A}$ defines a linear transformation from an $n$-dimensional space to an $m$-dimensional space. So once bases are fixed, this correspondence is a bijection from $\mathcal{L}(V, W)$, the set of linear transformations from $V$ to $W$ (itself a vector space), to the set of all $m \times n$ matrices.
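In coordinates, the matrix of a transformation can be recovered by feeding it the standard basis vectors, one per column. A small sketch (the helper `matrix_of` is illustrative, not standard library):

```python
import numpy as np

def matrix_of(T, n, m):
    """Matrix of a linear map T: R^n -> R^m from its action on
    the standard basis: column j is T(e_j)."""
    A = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        A[:, j] = T(e)
    return A

# Recover the matrix of T(x, y, z) = (x - y, y - z) from its basis images
T = lambda v: np.array([v[0] - v[1], v[1] - v[2]])
A = matrix_of(T, 3, 2)
```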

Let $T: V \to V$ be a linear transformation on a finite-dimensional vector space. Then, for any choice of basis $\mathcal{B}$, there is a matrix $M_\mathcal{B}$ such that $T(v) = M_\mathcal{B}v$. Which properties of the matrix $M_\mathcal{B}$ remain unchanged regardless of the basis $\mathcal{B}?$


**Cite as:** Linear Transformations. *Brilliant.org*. Retrieved from https://brilliant.org/wiki/linear-transformations/