Linear Algebra

Linear Independence

In the previous quiz, we explored the concept of sets of vectors spanning a vector space, meaning that every vector in the vector space can be written as a linear combination of the vectors in the set.

As we saw, there are multiple ways to span a vector space--for instance, both \(\left\{\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}0\\1\end{pmatrix}\right\}\) and \(\left\{\begin{pmatrix}4\\2\end{pmatrix}, \begin{pmatrix}2\\4\end{pmatrix}\right\}\) span the vector space \(\mathbb{R}^2\). Additionally, we saw that sometimes a vector in a spanning set is “redundant,” meaning that the set would still span the vector space if that vector were removed.
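(As a quick machine check--a sketch using Python's sympy library, a tool choice of ours rather than anything from this quiz--a pair of vectors spans \(\mathbb{R}^2\) exactly when the matrix with those vectors as columns has rank 2.)

```python
from sympy import Matrix

# A pair of vectors spans R^2 exactly when the matrix
# having them as columns has rank 2.
print(Matrix([[1, 0],
              [0, 1]]).rank())  # 2
print(Matrix([[4, 2],
              [2, 4]]).rank())  # 2
```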

In this quiz, we explore this idea further, looking at “minimal” spanning sets. As we’ll see, this leads to a number of related concepts that formalize some intuition we’ve previously seen.

Linear Independence

Given a spanning set, a vector is “redundant” if the set would still span the vector space after removing the vector. Suppose we have the following set: \[\left\{\begin{pmatrix}1\\2\\3\end{pmatrix}, \begin{pmatrix}1\\3\\5\end{pmatrix}, \begin{pmatrix}2\\5\\8\end{pmatrix}\right\}.\] This spans a vector space \(V\). Which of these vectors is redundant?

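One way to check an answer by machine (a sketch using Python's sympy library; the tool choice is ours, not the quiz's): a vector is redundant exactly when it lies in the span of the others, i.e. when appending it as a column does not raise the rank.

```python
from sympy import Matrix

# The three vectors from the problem above.
vectors = [Matrix([1, 2, 3]), Matrix([1, 3, 5]), Matrix([2, 5, 8])]

for i, v in enumerate(vectors):
    # Stack the other two vectors as the columns of a matrix.
    others = Matrix.hstack(*[w for j, w in enumerate(vectors) if j != i])
    # v is redundant exactly when adding it as a column
    # does not increase the rank.
    redundant = others.rank() == Matrix.hstack(others, v).rank()
    print(f"vector {i + 1} redundant: {redundant}")
```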

Linear Independence

Consider the following spanning set: \[\left\{\begin{pmatrix}1\\2\\3\end{pmatrix}, \begin{pmatrix}1\\3\\4\end{pmatrix}, \begin{pmatrix}3\\8\\11\end{pmatrix}\right\}.\] As before, each of these vectors is “redundant.” Instead of finding three different equations to show this, we can more compactly write \[a\begin{pmatrix}1\\2\\3\end{pmatrix} + b\begin{pmatrix}1\\3\\4\end{pmatrix} + c\begin{pmatrix}3\\8\\11\end{pmatrix} = 0.\] Which constants \(a, b, c,\) not all 0, make this equation hold?

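To find all working constants at once (a sketch in Python's sympy; the tool choice is ours): any \((a, b, c)\) in the nullspace of the matrix whose columns are the three vectors satisfies the equation.

```python
from sympy import Matrix

# Columns are the three vectors from the problem above.
A = Matrix([[1, 1,  3],
            [2, 3,  8],
            [3, 4, 11]])

# Every (a, b, c) in the nullspace of A satisfies
# a*v1 + b*v2 + c*v3 = 0; sympy returns a basis for that space.
for basis_vector in A.nullspace():
    print(basis_vector.T)
```

Any nonzero multiple of the printed basis vector also works.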

Linear Independence

The last few problems motivate a formal definition of “redundancy”: a set of vectors \(\{v_1, \ldots, v_n\}\) is called linearly dependent if there exist constants \(c_1, \ldots, c_n\), not all 0, such that \[c_1v_1 + c_2v_2 + \cdots + c_nv_n = 0.\] If no such constants exist, the vectors are called linearly independent.

For example, as we saw in the previous problem, the vectors \[\begin{pmatrix}1\\2\\3\end{pmatrix}, \begin{pmatrix}1\\3\\4\end{pmatrix}, \begin{pmatrix}3\\8\\11\end{pmatrix}\] are linearly dependent. However, are the vectors \[\begin{pmatrix}0\\1\end{pmatrix}, \begin{pmatrix}1\\0\end{pmatrix}\] linearly independent? Why?
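A mechanical test (a sketch in Python's sympy; the helper name `linearly_independent` is ours): a set of vectors is linearly independent exactly when the matrix with the vectors as columns has rank equal to the number of vectors.

```python
from sympy import Matrix

def linearly_independent(*vectors):
    """Vectors are independent iff the matrix having them as
    columns has rank equal to the number of vectors."""
    return Matrix.hstack(*vectors).rank() == len(vectors)

print(linearly_independent(Matrix([0, 1]), Matrix([1, 0])))  # True
print(linearly_independent(Matrix([1, 2, 3]),
                           Matrix([1, 3, 4]),
                           Matrix([3, 8, 11])))              # False
```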

Linear Independence

Suppose we have the equation \[x + 2y = 3.\] As we’ve seen before, this corresponds to the row vector \(\begin{pmatrix}1&2&3\end{pmatrix}\). Clearly, there are infinitely many solutions to this simple equation. Suppose we did one of the following:

1) Added the row vector \(\begin{pmatrix}2&6&4\end{pmatrix}\) \((\)i.e. the equation \(2x + 6y = 4).\)

2) Added the row vector \(\begin{pmatrix}2&4&6\end{pmatrix}\) \((\)i.e. the equation \(2x + 4y = 6).\)

What would happen?

A) In both cases, the system would have a unique solution.

B) In case 1, the system would have a unique solution, and in case 2, the system would still have infinite solutions.

C) In case 1, the system would still have infinite solutions, and in case 2, the system would have a unique solution.

D) In both cases, the system would still have infinite solutions.
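To check your answer (a sketch using sympy's linsolve on the two augmented systems; the tool choice is ours):

```python
from sympy import Matrix, linsolve, symbols

x, y = symbols("x y")

# Case 1: x + 2y = 3 together with 2x + 6y = 4.
print(linsolve(Matrix([[1, 2, 3], [2, 6, 4]]), x, y))
# Case 2: x + 2y = 3 together with 2x + 4y = 6.
print(linsolve(Matrix([[1, 2, 3], [2, 4, 6]]), x, y))
```

The first case prints a single point, while the second prints a one-parameter family of solutions.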

Linear Independence

Now let’s return to the original point of developing all these tools: solving systems of linear equations. Let’s look at a simple 3-variable system: \[ \begin{align*} x + 2y + 3z &= 5 \\ 2x - y + z &= 4 \\ ax + y + 9z &= 22. \end{align*} \] Which value of \(a\) would cause this system to not have a unique solution?
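To find such an \(a\) by machine (a sketch in sympy, our tool choice): the system has a unique solution exactly when the coefficient matrix has nonzero determinant, so we can solve for where the determinant vanishes.

```python
from sympy import Matrix, solve, symbols

a = symbols("a")

# Coefficient matrix of the system above.
M = Matrix([[1,  2, 3],
            [2, -1, 1],
            [a,  1, 9]])

# Values of a for which the determinant vanishes,
# i.e. where the system fails to have a unique solution.
print(solve(M.det(), a))
```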

Linear Independence

Armed with these ideas, let’s revisit an old topic: degrees of freedom. Previously, we had to rely on the intuitive notion of “number of variables that could be anything,” e.g. the equation \(x + y + z = 6\) has 2 degrees of freedom \((\)since \(x\) and \(y\) “could be anything,” but \(z\) depends on those two\().\)

Now we can formalize this idea. The number of degrees of freedom is the number of variables, minus one for each linearly independent equation. A set of equations \( e_{1}, \dots, e_{m} \) is linearly independent if and only if there are no constants \( a_{1}, \dots, a_{m} \), not all 0, such that \( a_{1} e_{1} + \dots + a_{m} e_{m} = 0 \). This makes sense with our intuitive notion from before: with no equations, obviously each variable could be anything, and each equation “usually” reduces the number of degrees of freedom by 1.

Consider the system \[ \begin{align*} x + y + 2z &= 4 \\ 2x + y + z &= 4 \\ 5x + 3y + 4z &= 12. \end{align*} \] How many degrees of freedom does this system have?
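To compute this mechanically (a sketch in sympy, our choice of tool): the number of linearly independent equations is the rank of the coefficient matrix, so the degrees of freedom are the number of variables minus that rank.

```python
from sympy import Matrix

# Coefficient matrix of the system above.
A = Matrix([[1, 1, 2],
            [2, 1, 1],
            [5, 3, 4]])

# Degrees of freedom = number of variables minus the number
# of linearly independent equations (the rank).
print(A.cols - A.rank())
```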

Linear Independence

In this chapter, we expanded on the idea of spanning sets, focusing on minimal spanning sets and how to find them. This leads naturally to the idea of linear independence, which in turn naturally leads to the idea of degrees of freedom. As we saw, adding equations (i.e. constraints) “usually” reduces the number of degrees of freedom by 1, but if an equation is “redundant”--that is, linearly dependent on the others--the number of degrees of freedom stays the same.

In the next chapter, we’ll take this idea further, exploring how to find minimal spanning sets for a given vector space.
