Linear Dependency: Max Size of Subset of Vectors

  • MHB
  • Thread starter: Yankel
  • Tags: Linear
In summary, to find the maximum size of a linearly independent subset of a set of vectors, check whether any vectors in the set are linearly dependent on one another and, if so, discard the redundant ones (for example, the zero vector, and any vector that is a scalar multiple of one already kept).
  • #1
Yankel
Hello all,

I have this set of vectors:

(1,1) , (-2,-2) , (0,0) , (3,-2)

I need to say whether it is linearly dependent, and I need to find the maximum size of a linearly independent subset of this set.

What I think is that as long as (0,0) is in the set, the set must be dependent. In addition, (1,1) and (-2,-2) are dependent on each other. So if I had to guess, I would say the maximum size is 2 vectors: (3,-2) and either (1,1) or (-2,-2). My question is, am I right, or am I missing something?

Thanks !
 
  • #2
Yankel said:
I have this set of vectors: (1,1), (-2,-2), (0,0), (3,-2). I need to say whether it is linearly dependent, and to find the maximum size of a linearly independent subset. [...] Am I right, or am I missing something?

Hi Yankel, :)

You are correct. $(0,\,0)$ makes any set containing it linearly dependent, since it can be obtained by multiplying any vector by $0$; for example, $(0,\,0)=0(1,\,1)$. Also, $(1,\,1)$ and $(-2,\,-2)$ are linearly dependent, since $(-2,\,-2)=-2(1,\,1)$. It can then be shown that the sets $\{(1,\,1),\,(3,\,-2)\}$ and $\{(-2,\,-2),\,(3,\,-2)\}$ are linearly independent. In the case of $\{(1,\,1),\,(3,\,-2)\}$, let

\[\alpha(1,\,1)+\beta(3,\,-2)=0\]

and show that both $\alpha$ and $\beta$ must equal zero.
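Carrying that check through explicitly (each coordinate gives one equation):

\[\alpha(1,\,1)+\beta(3,\,-2)=(0,\,0)\;\Longleftrightarrow\;\begin{cases}\alpha+3\beta=0\\ \alpha-2\beta=0\end{cases}\]

Subtracting the second equation from the first gives $5\beta=0$, so $\beta=0$, and then the first equation forces $\alpha=0$. Hence the set is linearly independent.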
 
  • #3
In a sense, linear dependency is a measure of "spanning redundancy". For example, adding the 0-vector never gives us any new vectors in the span, and we already can realize the 0-vector as a linear combination of any other set $\{v_1,\dots,v_k\}$ as:

$0v_1 + 0v_2 + \cdots + 0v_k$.

In the same way, if

$v_2 = cv_1$ for a nonzero $c$, then in any linear combination we can replace $v_2$ with $cv_1$.

For example, the linear combination:

$a_1v_1 + a_2v_2 + a_3v_3 + \cdots +a_kv_k$

is equal to:

$(a_1 + ca_2)v_1 + a_3v_3 + \cdots + a_kv_k$

We might have just as well eliminated $v_1$ in this case, replacing it with $\dfrac{1}{c}v_2$ in any linear combination.

This situation is a bit more complicated if we have something like:

$v_3 = b_1v_1 + b_2v_2$

as we might decide to keep $\{v_1,v_2\},\{v_1,v_3\}$ or $\{v_2,v_3\}$.

Generally speaking, the more dimensions we have in our space, the more chances we have of the linear dependency relations being "complicated". For $\text{dim}(V) > 4$ I wouldn't trust "elimination by inspection"; instead, form a matrix from the vector set and compute its rank.
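As a quick numerical sketch of that rank computation (using NumPy; the array layout is my own choice, not from the thread):

```python
import numpy as np

# Rows are the four vectors from the thread.
V = np.array([[1, 1],
              [-2, -2],
              [0, 0],
              [3, -2]])

r = np.linalg.matrix_rank(V)
print(r)           # 2: at most two of these vectors are independent
print(r < len(V))  # True: the set of four is linearly dependent
```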

***********

In this particular problem, you have some information right off the bat:

The dimension of your vector space is two: each vector has two coordinates, so everything lives in $\mathbb{R}^2$ (and your set contains vectors with nonzero entries, so the relevant space really is all of $\mathbb{R}^2$).

So at most, two of your vectors can be linearly independent.

Since (0,0) ALWAYS makes any set you add it to linearly dependent, get rid of it.

Pick any one of the three remaining nonzero vectors. Now we have a linearly independent set of one vector (which spans a one-dimensional subspace of our two-dimensional vector space).

Now pick a 2nd vector...is it a scalar multiple of the first vector (that is: does it lie in the subspace generated by the first vector)? If so, get rid of it, you don't need it.

Otherwise, it is linearly independent from the first vector and you are done.

Repeat this procedure until you have exhausted the set, or obtained two linearly independent vectors (which is the maximum possible).

(If we had MORE dimensions, we would have to continue with a third vector, checking that it does not lie in the subspace spanned by our first two choices.)
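The procedure above can be sketched in a few lines (a NumPy sketch; `max_independent_subset` is a name I've made up for illustration):

```python
import numpy as np

def max_independent_subset(vectors):
    """Greedy version of the procedure above: keep a vector only if it
    increases the rank of (i.e. is not in the span of) the vectors kept so far."""
    chosen = []
    for v in vectors:
        candidate = chosen + [v]
        # Full rank means the candidate set is linearly independent.
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            chosen.append(v)
    return chosen

vecs = [(1, 1), (-2, -2), (0, 0), (3, -2)]
print(max_independent_subset(vecs))  # [(1, 1), (3, -2)]
```

Note that, as discussed above, the answer is not unique: feeding the vectors in a different order would keep (-2, -2) instead of (1, 1).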
 

FAQ: Linear Dependency: Max Size of Subset of Vectors

What is linear dependency?

Linear dependency refers to a relationship among a set of vectors in a vector space: at least one vector in the set can be expressed as a linear combination of the others. In the simplest case, one vector is a scalar multiple of another, but in general the combination may involve several vectors.

What is the max size of a subset of vectors that can be linearly independent?

The maximum size of a linearly independent subset is the dimension of the vector space. For example, in a three-dimensional vector space, a linearly independent subset has at most three vectors; any fourth vector would necessarily be a linear combination of the others.

How do you determine if a set of vectors is linearly dependent?

To determine whether a set of vectors is linearly dependent, you can use the determinant method or the rank method. The determinant method applies when the number of vectors equals the dimension of the space: form a square matrix from the vectors, and if its determinant is zero, the vectors are linearly dependent. The rank method works for any number of vectors: create a matrix with the vectors as columns and reduce it to row-echelon form. If the rank of the matrix is less than the number of vectors, the vectors are linearly dependent.
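A minimal sketch of both methods in NumPy (the example matrix is my own, chosen so that the third vector is the sum of the first two):

```python
import numpy as np

# Three vectors in R^3 as rows; the third is the sum of the first two.
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

# Determinant method (square matrix only): zero determinant -> dependent.
print(abs(np.linalg.det(M)) < 1e-12)  # True

# Rank method (any shape): rank below the number of vectors -> dependent.
print(np.linalg.matrix_rank(M))       # 2, which is less than 3
```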

Can a linearly dependent set of vectors still span a vector space?

Yes, a linearly dependent set of vectors can still span a vector space. For example, $\{(1,0),\,(0,1),\,(1,1)\}$ is linearly dependent yet spans all of $\mathbb{R}^2$. Dependence only means the set contains redundant vectors: some proper subset of it already spans the same subspace.

How can linear dependency be used in practical applications?

Linear dependency has various applications in mathematics, physics, and engineering. For example, it appears in linear regression analysis, where dependent (collinear) predictor variables make coefficient estimates unstable. It is also central to solving systems of linear equations and to finding eigenvectors and eigenvalues in linear algebra. In physics and engineering, checking linear independence verifies that a chosen set of basis vectors, equations, or constraints is non-redundant.
