Linear Combinations#
Definition: Linear Combination
Let \(\mathbf{v}_1, \cdots, \mathbf{v}_n\) be vectors in a vector space \((V, F, +, \cdot)\).
A linear combination of \(\mathbf{v}_1, \cdots, \mathbf{v}_n\) is any \(\mathbf{v} \in V\) which can be expressed as
\[\mathbf{v} = \lambda_1 \mathbf{v}_1 + \cdots + \lambda_n \mathbf{v}_n\]
for some \(\lambda_1, \dotsc, \lambda_n \in F\).
Example
We can express every \(\mathbf{v} = (v_1, \dotsc, v_n) \in F^n\) as a linear combination
\[\mathbf{v} = v_1 \mathbf{e}_1 + \cdots + v_n \mathbf{e}_n\]
of the standard basis vectors
\[\mathbf{e}_1 = (1, 0, \dotsc, 0), \quad \mathbf{e}_2 = (0, 1, \dotsc, 0), \quad \dotsc, \quad \mathbf{e}_n = (0, 0, \dotsc, 1).\]
Example
The polynomial \(2x^2 + x + 3\) is a linear combination of the polynomials \(x^0, x, x^2\).
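As a quick numerical illustration of the definition (a sketch with made-up vectors, not part of the text above), we can test whether a given vector is a linear combination of others by solving a least-squares system with NumPy:

```python
import numpy as np

# Hypothetical example: is w a linear combination of v1, v2 in R^3?
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
w = np.array([2.0, 3.0, 5.0])

# Stack v1, v2 as columns and solve A @ lam = w in the least-squares sense;
# w is a linear combination iff the residual is (numerically) zero.
A = np.column_stack([v1, v2])
lam, residual, rank, _ = np.linalg.lstsq(A, w, rcond=None)
print(lam)                      # the coefficients lambda_1, lambda_2
print(np.allclose(A @ lam, w))  # True: w = 2*v1 + 3*v2
```

A zero residual (up to floating-point tolerance) means \(\mathbf{w}\) lies in the span of \(\mathbf{v}_1, \mathbf{v}_2\), and the solved coefficients are the \(\lambda_i\) from the definition.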
Linear Independence#
Definition: Linear Independence
A subset \(S \subseteq V\) of a vector space \(V\) is linearly independent if removing any \(\mathbf{v}\) from \(S\) results in a span different from that of \(S\):
\[\operatorname{span}(S \setminus \{\mathbf{v}\}) \ne \operatorname{span}(S) \quad \text{for all } \mathbf{v} \in S.\]
Let \(\mathbf{v}_1, \cdots, \mathbf{v}_n\) be vectors in a vector space \((V, F, +, \cdot)\).
We say that \(\mathbf{v}_1, \dotsc, \mathbf{v}_n\) are linearly independent if
\[\operatorname{span}(\mathbf{v}_1, \dotsc, \mathbf{v}_{k-1}, \mathbf{v}_{k+1}, \dotsc, \mathbf{v}_n) \ne \operatorname{span}(\mathbf{v}_1, \dotsc, \mathbf{v}_n)\]
for all \(k \in \{1, \dotsc, n\}\).
Warning
Saying that \(\mathbf{v}_1, \dotsc, \mathbf{v}_n\) are linearly independent is the same as saying that the set \(\{\mathbf{v}_1, \dotsc, \mathbf{v}_n\}\) is linearly independent only when \(\mathbf{v}_1, \dotsc, \mathbf{v}_n\) are all different.
For example, if \(\mathbf{v}_1\) and \(\mathbf{v}_2\) are linearly independent, then \(\mathbf{v}_1, \mathbf{v}_1, \mathbf{v}_2\) are not linearly independent, since \(\operatorname{span}(\mathbf{v}_1, \mathbf{v}_1, \mathbf{v}_2) = \operatorname{span}(\mathbf{v}_1, \mathbf{v}_2)\). However, the set \(\{\mathbf{v}_1, \mathbf{v}_1, \mathbf{v}_2\}\) is linearly independent, because \(\{\mathbf{v}_1, \mathbf{v}_1, \mathbf{v}_2\} = \{\mathbf{v}_1, \mathbf{v}_2\}\) and, for instance, \(\operatorname{span}(\{\mathbf{v}_1, \mathbf{v}_1, \mathbf{v}_2\}) \ne \operatorname{span}(\{\mathbf{v}_1, \mathbf{v}_1, \mathbf{v}_2\} \setminus \{\mathbf{v}_1\}) = \operatorname{span}(\{\mathbf{v}_2\})\).
Example
The vectors
\[\mathbf{e}_1 = (1, 0, \dotsc, 0), \quad \mathbf{e}_2 = (0, 1, \dotsc, 0), \quad \dotsc, \quad \mathbf{e}_n = (0, 0, \dotsc, 1)\]
in \(F^n\) are linearly independent.
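For concrete vectors in \(\mathbb{R}^n\), linear independence can be checked numerically: the vectors are linearly independent iff the matrix having them as columns has full column rank. A minimal sketch (the helper name `independent` is ours, not from the text):

```python
import numpy as np

# Vectors are linearly independent iff the matrix with them as columns
# has rank equal to the number of vectors.
def independent(vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

e1, e2, e3 = np.eye(3)                 # standard basis of R^3
print(independent([e1, e2, e3]))       # True
print(independent([e1, e2, e1 + e2]))  # False: e1 + e2 is redundant
```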
Example
For each \(t \in \mathbb{R}\) we define the function \(f_t: \mathbb{R} \to \mathbb{R}\) in the following way:
\[f_t(x) = \begin{cases} 1 & \text{if } x = t, \\ 0 & \text{otherwise.} \end{cases}\]
The set \(\{f_t: t \in \mathbb{R}\}\) of all \(f_t\)'s is linearly independent in the space of all real functions.
Theorem: Alternative Definition
Let \(V\) be a vector space.
A subset \(S \subseteq V\) is linearly independent if and only if there is no \(\mathbf{v} \in S\) which can be represented as a linear combination of vectors from \(S \setminus \{\mathbf{v}\}\).
Proof
We need to prove two things:
- (I) If \(S\) is linearly independent, then there is no \(\mathbf{v} \in S\) which can be represented as a linear combination of vectors from \(S \setminus \{\mathbf{v}\}\).
- (II) If there is no \(\mathbf{v} \in S\) which can be represented as a linear combination of vectors from \(S \setminus \{\mathbf{v}\}\), then \(S\) is linearly independent.
Proof of (I):
Suppose, for contradiction, that there is some \(\mathbf{v} \in S\) which can be expressed as a linear combination of vectors from \(S \setminus \{\mathbf{v}\}\). This means that \(\mathbf{v} \in \operatorname{span}(S \setminus \{\mathbf{v}\})\), and hence \(\operatorname{span}(S \setminus \{\mathbf{v}\}) = \operatorname{span}(S)\). But this contradicts the definition of linear independence, which requires that removing \(\mathbf{v}\) from \(S\) change the span.
Proof of (II):
Suppose that \(S\) is not linearly independent. Then there is some \(\mathbf{v} \in S\) with \(\operatorname{span}(S \setminus \{\mathbf{v}\}) = \operatorname{span}(S)\). Since \(\mathbf{v} \in \operatorname{span}(S)\), it follows that \(\mathbf{v} \in \operatorname{span}(S \setminus \{\mathbf{v}\})\), i.e. \(\mathbf{v}\) can be represented as a linear combination of vectors from \(S \setminus \{\mathbf{v}\}\), which is a contradiction.
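For a finite set of numeric vectors, this span criterion can be checked directly: test, for each vector, whether it can be reproduced from the remaining ones via least squares. A sketch (the helper names `in_span` and `independent_by_span` are ours):

```python
import numpy as np

# Is v in the span of the given list of vectors? (least-squares test)
def in_span(v, vectors):
    if not vectors:                      # span of the empty set is {0}
        return np.allclose(v, 0)
    A = np.column_stack(vectors)
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.allclose(A @ coef, v)

# S is independent iff no vector in S lies in the span of the others.
def independent_by_span(S):
    return not any(in_span(S[k], S[:k] + S[k + 1:]) for k in range(len(S)))

print(independent_by_span([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # True
print(independent_by_span([np.array([1.0, 0.0]), np.array([2.0, 0.0])]))  # False
```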
Theorem: Alternative Definition
Let \(V\) be a vector space.
A subset \(S \subseteq V\) is linearly independent if and only if
\[\lambda_1 \mathbf{v}_1 + \cdots + \lambda_n \mathbf{v}_n = \mathbf{0} \implies \lambda_1 = \cdots = \lambda_n = 0\]
for all \(\lambda_1, \dotsc, \lambda_n \in F\) and all pairwise-different \(\mathbf{v}_1, \dotsc, \mathbf{v}_n \in S\).
Proof
We need to prove two things:
- (I) If \(S\) is linearly independent, then \(\lambda_1 \mathbf{v}_1 + \cdots + \lambda_n \mathbf{v}_n = 0 \implies \lambda_1 = \cdots = \lambda_n = 0\) for all pairwise-different \(\mathbf{v}_1, \dotsc, \mathbf{v}_n \in S\).
- (II) If \(\lambda_1 \mathbf{v}_1 + \cdots + \lambda_n \mathbf{v}_n = 0 \implies \lambda_1 = \cdots = \lambda_n = 0\) for all pairwise-different \(\mathbf{v}_1, \dotsc, \mathbf{v}_n \in S\), then \(S\) is linearly independent.
Proof of (I):
Suppose, for contradiction, that \(\lambda_1 \mathbf{v}_1 + \cdots + \lambda_n \mathbf{v}_n = \mathbf{0}\) for some pairwise-different \(\mathbf{v}_1, \dotsc, \mathbf{v}_n \in S\) with some \(\lambda_k \ne 0\). Then
\[\mathbf{v}_k = -\sum_{j \ne k} \frac{\lambda_j}{\lambda_k} \mathbf{v}_j,\]
so \(\mathbf{v}_k \in \operatorname{span}(S \setminus \{\mathbf{v}_k\})\) and hence \(\operatorname{span}(S \setminus \{\mathbf{v}_k\}) = \operatorname{span}(S)\), contradicting the linear independence of \(S\).
Proof of (II):
Suppose that \(S\) is not linearly independent. Then there exists some \(\mathbf{v} \in S\) such that \(\operatorname{span}(S) = \operatorname{span}(S \setminus \{\mathbf{v}\})\). In particular \(\mathbf{v} \in \operatorname{span}(S \setminus \{\mathbf{v}\})\), so there is some \(N \in \mathbb{N}\), some pairwise-different \(\mathbf{v}_1, \dotsc, \mathbf{v}_N \in S \setminus \{\mathbf{v}\}\) and some \(\lambda_1, \dotsc, \lambda_N \in F\) such that
\[\mathbf{v} = \lambda_1 \mathbf{v}_1 + \cdots + \lambda_N \mathbf{v}_N.\]
This means that
\[(-1) \mathbf{v} + \lambda_1 \mathbf{v}_1 + \cdots + \lambda_N \mathbf{v}_N = \mathbf{0}.\]
However, this is a contradiction: \(\mathbf{v}, \mathbf{v}_1, \dotsc, \mathbf{v}_N\) are pairwise different, so by assumption all coefficients in this combination must be zero, yet the coefficient of \(\mathbf{v}\) is \(-1 \ne 0\).
Theorem: Size Limit for Linearly Independent Sets
The number of elements in any set \(I\) of linearly independent vectors from a finitely generated vector space \((V,F,+,\cdot)\) is always less than or equal to the dimension of \(V\).
Proof
Let \(B = \{\mathbf{b}_1, \dotsc, \mathbf{b}_n\}\) be a Hamel basis of \(V\) and let \(I = \{\mathbf{v}_1, \dotsc, \mathbf{v}_m\}\) be a set of linearly independent vectors.
By the exchange property of Hamel bases, there are \(n - m\) vectors in \(B\) which, together with the vectors from \(I\), form a basis of \(V\). In particular, \(n - m\) cannot be negative, i.e. \(m \le n = \dim V\), which completes the proof.
Definition: Maximality
A linearly independent subset \(S \subseteq V\) of a vector space \(V\) is maximal if there is no \(\mathbf{v} \in V \setminus S\) such that the union \(S \cup \{ \mathbf{v} \}\) is still linearly independent.
Tip
This means that no matter how much we try, we cannot find any vector outside \(S\) such that if we add it to \(S\), we would still end up with a linearly independent set.
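In the finite, numerical setting, a maximal linearly independent subset of a given list of candidate vectors can be built greedily: keep a vector only if adding it preserves independence. A sketch with made-up data (the helper name is ours; maximality here is relative to the candidate list):

```python
import numpy as np

# Greedily grow an independent subset: a vector is kept only if the
# column matrix of the enlarged subset still has full column rank.
def maximal_independent_subset(vectors):
    chosen = []
    for v in vectors:
        trial = chosen + [v]
        if np.linalg.matrix_rank(np.column_stack(trial)) == len(trial):
            chosen.append(v)
    return chosen

vs = [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])]
S = maximal_independent_subset(vs)
print(len(S))  # 2
```

Consistently with the size-limit theorem above, the result can never contain more than \(\dim V\) vectors (here \(\dim \mathbb{R}^2 = 2\)).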
Linear Dependence#
Definition: Linear Dependence
Let \(\mathbf{v}_1, \cdots, \mathbf{v}_n\) be vectors in a vector space \((V, F, +, \cdot)\).
We say that \(\mathbf{v}_1, \cdots, \mathbf{v}_n\) are linearly dependent iff they are not linearly independent, i.e. there exist \(c_1, \cdots, c_n \in F\) with at least one \(c_i \ne 0\) such that
\[c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n = \mathbf{0}.\]
Theorem: Linear Dependence \(\implies\) Linear Combination
If \(\{ \mathbf{v}_1, \cdots, \mathbf{v}_n\} \, (n \ge 2)\) are linearly dependent vectors, then there is at least one \(\mathbf{v}_i \in \{ \mathbf{v}_1, \cdots, \mathbf{v}_n\}\) which can be expressed as a linear combination of the rest of the vectors:
\[\mathbf{v}_i = c_1 \mathbf{v}_1 + \cdots + c_{i-1} \mathbf{v}_{i-1} + c_{i+1} \mathbf{v}_{i+1} + \cdots + c_n \mathbf{v}_n.\]
Proof
According to the definition of linear dependence, there are coefficients \(h_1, \cdots, h_n \in F\) with at least one \(h_i \ne 0\) such that
\[h_1 \mathbf{v}_1 + \cdots + h_n \mathbf{v}_n = \mathbf{0}.\]
Let's move everything except \(h_i \mathbf{v}_i\) to the other side of the equation:
\[h_i \mathbf{v}_i = -h_1 \mathbf{v}_1 - \cdots - h_{i-1} \mathbf{v}_{i-1} - h_{i+1} \mathbf{v}_{i+1} - \cdots - h_n \mathbf{v}_n.\]
Since \(h_i \ne 0\), we can divide both sides by it:
\[\mathbf{v}_i = -\frac{h_1}{h_i} \mathbf{v}_1 - \cdots - \frac{h_{i-1}}{h_i} \mathbf{v}_{i-1} - \frac{h_{i+1}}{h_i} \mathbf{v}_{i+1} - \cdots - \frac{h_n}{h_i} \mathbf{v}_n.\]
We have thus obtained \(\mathbf{v}_i\) as a linear combination of the other vectors, where \(c_j = -\frac{h_j}{h_i}\).
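The recipe in this proof, find coefficients \(h\) with \(h_1 \mathbf{v}_1 + \cdots + h_n \mathbf{v}_n = \mathbf{0}\) and divide by a non-zero \(h_i\), can be carried out numerically via an SVD null-space computation. A sketch with made-up vectors:

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
v3 = np.array([1.0, 1.0])            # v3 = v1 + v2, so the vectors are dependent

A = np.column_stack([v1, v2, v3])
# A right-singular vector for a zero singular value spans the null space,
# i.e. gives h != 0 with h_1 v_1 + h_2 v_2 + h_3 v_3 = 0.
_, _, Vt = np.linalg.svd(A)
h = Vt[-1]
i = int(np.argmax(np.abs(h)))        # an index with h_i != 0
c = -h / h[i]                        # c_j = -h_j / h_i, and c_i = -1
others = [j for j in range(3) if j != i]
v_i_rebuilt = A[:, others] @ c[others]
print(np.allclose(v_i_rebuilt, A[:, i]))  # True: v_i is a combination of the rest
```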