Linear combinations and span
Linear combinations are sums of scaled vectors
At the end of the last lesson, we took $3$ of the basis vector $\hat{i}$ and $2$ of the basis vector $\hat{j}$ to express the vector $(3,2)$ as $3\hat{i}+2\hat{j}$.
Notice how the vector $\hat{i}$ is multiplied by a scalar of $3$, and the vector $\hat{j}$ is multiplied by a scalar of $2$. In other words, to express $(3,2)$, we’ve only done two operations: 1) we’ve multiplied vectors by scalars, and 2) we’ve added these scaled vectors together.
Any expression like this one, which is just the sum of scaled vectors, is called a linear combination. Linear combinations can sum any number of vectors, not just two. So $c_1\vec{v}_1+c_2\vec{v}_2$ is a linear combination, $c_1\vec{v}_1+c_2\vec{v}_2+c_3\vec{v}_3$ is a linear combination, $c_1\vec{v}_1+c_2\vec{v}_2+c_3\vec{v}_3+c_4\vec{v}_4$ is a linear combination, and so on.
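If it helps to see this numerically, here's a minimal numpy sketch (the vectors and scalars are just illustrative values, not ones from the lesson) that builds a linear combination by scaling two vectors and adding the results:

```python
import numpy as np

# Two arbitrary vectors in R^2 (illustrative values only)
v = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])

# Scalars for the linear combination
c1, c2 = 4.0, -2.0

# A linear combination is just scaled vectors summed together
combo = c1 * v + c2 * w
print(combo)  # [10.  7.]
```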
Span of a vector set
The span of a set of vectors is the collection of all vectors which can be represented by some linear combination of the set.
That sounds confusing, but let’s think back to the basis vectors $\hat{i}$ and $\hat{j}$ in $\mathbb{R}^2$. If you choose absolutely any vector, anywhere in $\mathbb{R}^2$, you can get to that vector using a linear combination of $\hat{i}$ and $\hat{j}$. If I choose $(3,2)$, I can get to it with the linear combination $3\hat{i}+2\hat{j}$, or if I choose $(-4,5)$, I can get to it with the linear combination $-4\hat{i}+5\hat{j}$. There’s no vector you can find in $\mathbb{R}^2$ that you can’t reach with a linear combination of $\hat{i}$ and $\hat{j}$.
And because you can get to any vector in $\mathbb{R}^2$ with a linear combination of $\hat{i}$ and $\hat{j}$, you can say specifically that $\hat{i}$ and $\hat{j}$ span $\mathbb{R}^2$. If a set of vectors spans a space, it means you can use a linear combination of those vectors to reach any vector in the space.
In the same way, I can get to any vector, anywhere in $\mathbb{R}^3$, using a linear combination of the basis vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$, which means $\hat{i}$, $\hat{j}$, and $\hat{k}$ span $\mathbb{R}^3$, the entirety of three-dimensional space.
I could also write these facts as

$$\text{Span}(\hat{i},\hat{j})=\mathbb{R}^2$$

$$\text{Span}(\hat{i},\hat{j},\hat{k})=\mathbb{R}^3$$

One other point: the span of the zero vector $\vec{O}$ is always just the origin, so $\text{Span}(\vec{O})=(0,0)$ in $\mathbb{R}^2$, $\text{Span}(\vec{O})=(0,0,0)$ in $\mathbb{R}^3$, etc.
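As a quick sanity check (not part of the original lesson, just a sketch), this numpy snippet confirms that an arbitrary vector $(a,b)$ is always reached by the linear combination $a\hat{i}+b\hat{j}$, and that every linear combination of the zero vector stays at the origin:

```python
import numpy as np

i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

# Any vector (a, b) in R^2 equals the linear combination a*i_hat + b*j_hat
rng = np.random.default_rng(0)
for a, b in rng.normal(size=(5, 2)):
    assert np.allclose(a * i_hat + b * j_hat, np.array([a, b]))

# Every linear combination of the zero vector is still the zero vector,
# so the span of the zero vector is just the origin
zero = np.zeros(2)
print(3.0 * zero + (-7.0) * zero)  # [0. 0.]
```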
Span and linear independence
So our next step is to be able to determine when a vector set spans a space, and when it doesn’t. In other words, how can we tell when every point in the space is, or is not, reachable by a linear combination of the vector set?
The answer has to do with whether the vectors in the set are linearly independent or linearly dependent. We’ll talk about linear (in)dependence in the next lesson, but for now, let’s just make three points:
Any two two-dimensional linearly independent vectors will span $\mathbb{R}^2$. The two-dimensional basis vectors $\hat{i}$ and $\hat{j}$ are linearly independent, which is why they span $\mathbb{R}^2$.
Any three three-dimensional linearly independent vectors will span $\mathbb{R}^3$. The three-dimensional basis vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ are linearly independent, which is why they span $\mathbb{R}^3$.
Any $n$ $n$-dimensional linearly independent vectors will span $\mathbb{R}^n$. The $n$-dimensional basis vectors are linearly independent, which is why they span $\mathbb{R}^n$. (A quick numerical check of this pattern follows below.)
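One way to test these statements numerically (the helper `spans_Rn` here is my own illustration, not something from the lesson) is to stack the vectors as columns of a matrix and check whether its rank equals the dimension of the space:

```python
import numpy as np

def spans_Rn(vectors):
    """Return True if the given n-dimensional vectors span R^n.

    They do exactly when the matrix with the vectors as its columns
    has rank n, i.e. when there are at least n vectors and they're
    linearly independent.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[0]

# The three 3-dimensional basis vectors are linearly independent,
# so they span R^3
i_hat = np.array([1.0, 0.0, 0.0])
j_hat = np.array([0.0, 1.0, 0.0])
k_hat = np.array([0.0, 0.0, 1.0])
print(spans_Rn([i_hat, j_hat, k_hat]))  # True
```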
So when is a set of vectors linearly dependent, such that they won’t span the vector space $\mathbb{R}^2$, $\mathbb{R}^3$, or $\mathbb{R}^n$?
First, we should say that we can never span $\mathbb{R}^n$ with fewer than $n$ vectors. In other words, we can’t span $\mathbb{R}^2$ with one or fewer vectors, we can’t span $\mathbb{R}^3$ with two or fewer vectors, and we can’t span $\mathbb{R}^n$ with $n-1$ or fewer vectors.
Second, assuming we have enough vectors to span the space, generally speaking, those vectors need to be “different enough” from each other that they can cover the whole vector space. It’s actually easier to think about when the vectors won’t be “different enough” to span the vector space:
When two two-dimensional vectors lie along the same line (or along parallel lines), they’re called collinear, they’re linearly dependent, and they won’t span $\mathbb{R}^2$.
When three three-dimensional vectors lie in the same plane, they’re called coplanar, they’re linearly dependent, and they won’t span $\mathbb{R}^3$.
When $n$ $n$-dimensional vectors lie in the same $(n-1)$-dimensional space, they’re linearly dependent, and they won’t span $\mathbb{R}^n$.
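Using the same rank idea as the sketch above (again, just an illustration with made-up vectors), you can see that collinear vectors, coplanar vectors, and too-few vectors all fail to span:

```python
import numpy as np

def spans_Rn(vectors):
    # Same illustrative helper as before: rank of the column matrix
    # must equal the dimension of the space
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[0]

# Two collinear 2D vectors (one is a scalar multiple of the other)
# only cover a line, not all of R^2
print(spans_Rn([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False

# Three coplanar 3D vectors (the third is the sum of the first two)
# only cover a plane, not all of R^3
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 1.0])
print(spans_Rn([u, v, u + v]))  # False

# And fewer than n vectors can never span R^n: two 3D vectors
# cover at most a plane
print(spans_Rn([u, v]))  # False
```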
Let’s hold off until the next section on more detail about linear dependence and independence, and turn to an example.
The span of a vector set is the complete set of all possible linear combinations
You can use a linear combination of basis vectors in a space to get to any vector in that space
Example
Prove that you can use a linear combination of the basis vectors $\hat{i}$ and $\hat{j}$ to get any vector in $\mathbb{R}^2$.
We can set up a vector equation, writing an arbitrary vector in $\mathbb{R}^2$ as $(a,b)$, then write the basis vectors as column vectors.

$$c_1\hat{i}+c_2\hat{j}=\begin{bmatrix}a\\ b\end{bmatrix}$$

$$c_1\begin{bmatrix}1\\ 0\end{bmatrix}+c_2\begin{bmatrix}0\\ 1\end{bmatrix}=\begin{bmatrix}a\\ b\end{bmatrix}$$

This matrix equation can be rewritten as a system of equations:

$$c_1(1)+c_2(0)=a$$

$$c_1(0)+c_2(1)=b$$

Simplifying the system leaves us with

$$c_1=a$$

$$c_2=b$$
So what have we shown? We realize that this system means we could pick any vector $(a,b)$ in $\mathbb{R}^2$, and we’d get $c_1=a$ and $c_2=b$, which means our linear combination will simply be $a$ number of $\hat{i}$’s, and $b$ number of $\hat{j}$’s.
So if, for example, the vector we chose in $\mathbb{R}^2$ was $(2,-5)$, then the linear combination of the basis vectors is

$$2\hat{i}-5\hat{j}$$

which means we would need to use $2$ of the $\hat{i}$ vectors and $-5$ of the $\hat{j}$ vectors in order to reach $(2,-5)$ from the origin.
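To close the loop on the example (the specific vector here is my own illustrative choice), the same system can be solved numerically, and the scalars come out equal to the components of the chosen vector, just as the algebra showed:

```python
import numpy as np

i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

# Pick any (a, b) in R^2; these values are illustrative only
a, b = 2.0, -5.0
target = np.array([a, b])

# Solve c1*i_hat + c2*j_hat = (a, b), i.e. the 2x2 system from the lesson
A = np.column_stack([i_hat, j_hat])
c1, c2 = np.linalg.solve(A, target)

print(c1, c2)  # 2.0 -5.0, so c1 = a and c2 = b, as derived above
```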