Towards the end of the previous chapter, we began investigating the matrix equation \[Ax = b\] and saw how important it is in solving systems of linear equations, particularly via Gaussian elimination. We also briefly observed that the case \(b = 0\) is of particular importance, and we’ll explore it in much more detail in this chapter.

In the previous problem, we saw that solutions to the matrix equation \(Ax = 0\) are **closed under addition**, meaning that if we add two solutions together, we get ourselves a new solution. We also saw that the solutions are **closed under (scalar) multiplication**, meaning that any multiple of a solution is still a solution.

Let’s see when and how this can be applied elsewhere. Suppose we have the matrix equation \[\begin{pmatrix}1&3&2\\-1&-2&4\end{pmatrix}x = \begin{pmatrix}13\\5\end{pmatrix}.\] We can find that \(x = \begin{pmatrix}-9\\6\\2\end{pmatrix}\) is a solution, as is \(x = \begin{pmatrix}7\\0\\3\end{pmatrix}\). Which of the following is also a solution?
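Before answering, it helps to see the distinction numerically. Here is a quick check (a sketch using NumPy; the variable names are ours): each of the two given vectors satisfies \(Ax = b\), their *sum* does not, but their *difference* solves the homogeneous equation \(Ax = 0\).

```python
import numpy as np

A = np.array([[1, 3, 2],
              [-1, -2, 4]])
b = np.array([13, 5])

x1 = np.array([-9, 6, 2])
x2 = np.array([7, 0, 3])

# Both vectors solve Ax = b:
print(A @ x1)         # [13  5]
print(A @ x2)         # [13  5]

# Their sum does NOT, since A(x1 + x2) = b + b = 2b:
print(A @ (x1 + x2))  # [26 10]

# But their difference solves the homogeneous equation Ax = 0,
# since A(x1 - x2) = b - b = 0:
print(A @ (x1 - x2))  # [0 0]
```

So closure under addition and scalar multiplication is a property of the solutions to \(Ax = 0\), not of the solutions to \(Ax = b\) when \(b \ne 0\).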

A **magic square** is a \(3 \times 3\) grid of numbers in which every row, column, and diagonal sums to the same value. For instance, the classical example is
\[
\begin{pmatrix}
2 & 7 & 6 \\
9 & 5 & 1 \\
4 & 3 & 8
\end{pmatrix}.
\]
With this in mind, which of the following is also a magic square using the numbers \(3, 5, 7, 9, 11, 13, 15, 17\), and \(19?\)
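Closure gives a shortcut here: the all-ones grid is trivially magic, so doubling every entry of a magic square and adding \(1\) produces another magic square. A quick check in plain Python (the helper function `is_magic` is ours, written for this illustration):

```python
def is_magic(grid):
    """True if every row, column, and both diagonals sum to the same value."""
    n = len(grid)
    target = sum(grid[0])
    rows = all(sum(row) == target for row in grid)
    cols = all(sum(grid[r][c] for r in range(n)) == target for c in range(n))
    diags = (sum(grid[i][i] for i in range(n)) == target and
             sum(grid[i][n - 1 - i] for i in range(n)) == target)
    return rows and cols and diags

classical = [[2, 7, 6],
             [9, 5, 1],
             [4, 3, 8]]
print(is_magic(classical))  # True

# 2M + (all-ones grid): each entry x becomes 2x + 1, giving exactly
# the entries 3, 5, 7, ..., 19, and the result is still magic.
scaled = [[2 * x + 1 for x in row] for row in classical]
print(is_magic(scaled))  # True
```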

By now, we’ve seen a lot of different examples where solutions are closed under addition and scalar multiplication. These are hardly all the possible examples! For instance, this concept appears often in physics, signal processing, and differential equations. It is therefore important to have a more formal definition of this phenomenon.

Formally, a **vector space** \(V\) is a set of objects, called **vectors**. It is important to note that we use a much broader definition of the word “vector” here than you might be used to. For us, a vector just means an element of a vector space, such as the matrices, functions, and magic squares we’ve seen above.

Of course, a set isn’t terribly interesting by itself; we also need a way to add vectors, and a way to multiply vectors by constants (or **scalars**). For instance, for matrices we define addition and scalar multiplication element-wise. These definitions need to satisfy several conditions to make sure they still “make sense”:

- Associativity and commutativity still hold, meaning \((u + v) + w = u + (v + w)\) and \(u + v = v + u\) for all \(u,v,w \in V\).
- Scalar multiplication is associative as well, i.e. \(c_1(c_2v) = (c_1c_2)v\) for constants \(c_1, c_2\).
- The distributive law holds: \(c(u + v) = cu + cv\) and \((c_1 + c_2)v = c_1v + c_2v\) for constants \(c, c_1, c_2\).

Finally, there are a few conditions that make a vector space a vector space:

- There is a “zero vector” in \(V\), appropriately called “0”, so that \(v + 0 = v\) for all \(v\) in \(V\).
- If \(v\) is a vector in \(V\), then there is a vector \(-v \in V\) so that \(v + (-v) = 0\).
- If \(u\) and \(v\) are vectors in \(V\), then \(u + v\) is also in \(V\).
- If \(u\) is a vector in \(V\), then \(cu\) is also in \(V\) for any constant \(c\).

Most of these axioms are just making sure operations “work normally,” in a similar way to numbers. The last two are the real heroes here, distinguishing a vector space from just a boring set.
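The two closure conditions are also the easiest to test concretely. Here is a sketch in NumPy (both candidate sets below are illustrative examples of ours, not from the text): symmetric \(2 \times 2\) matrices pass both closure checks, while vectors with only nonnegative entries fail closure under scalar multiplication.

```python
import numpy as np

# Candidate 1: symmetric 2x2 matrices. Sums and scalar multiples of
# symmetric matrices are again symmetric, so both closure conditions hold.
u = np.array([[1.0, 2.0],
              [2.0, 5.0]])
v = np.array([[0.0, -3.0],
              [-3.0, 4.0]])
print(np.array_equal(u + v, (u + v).T))      # True
print(np.array_equal(2.5 * u, (2.5 * u).T))  # True

# Candidate 2: vectors with all-nonnegative entries. This set is closed
# under addition, but scaling by -1 leaves the set, so it fails the
# scalar-multiplication condition and is not a vector space.
w = np.array([1.0, 4.0])
print(bool(((-1) * w >= 0).all()))           # False
```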

This rigorous formulation might look like overkill to describe a relatively intuitive concept, but as we’ll see in the next question, it’s not always obvious when something is a vector space and when it isn’t, so having a nice checklist to work through is very useful.

One quick note: everywhere we say “constants” above, we almost always mean real numbers. Sometimes we will want to allow other scalars, such as complex numbers, in which case we specifically refer to the space as a “complex vector space” (and the vectors are also complex-valued).

Which of the following is a vector space?
