Dimension statement about (finite-dimensional) subspaces

In summary, the thread works through a proof that if ##U_1 \cap U_2 = \{0\}##, then ##\dim (U_1 \cap U_3) + \dim (U_2 \cap U_3) \leq \dim (U_3)##. The key step is to take a basis of ##U_1 \cap U_3## and a basis of ##U_2 \cap U_3##, show that their union is linearly independent (this is where ##U_1 \cap U_2 = \{0\}## is needed), and note that all of these vectors lie in ##U_3##.
  • #1
JD_PM
Homework Statement
True or false question? (Prove or give a counterexample)

Let ##V## be a real finite vector space and ##U_1, U_2## and ##U_3## be subspaces (of ##V##) with ##U_1 \cap U_2 = \{0\}##. Then the following statement holds

$$\dim (U_1 \cap U_3) + \dim (U_2 \cap U_3) \leq \dim (U_3)$$
Relevant Equations
N/A
My intuition tells me this is a true statement, so let's try to prove it.

The dimension is defined as the number of elements of a basis. Hence, we can work in terms of bases to prove the statement.

Given that ##U_3## appears on both sides of the inequality, let's get a basis for it. How? Suppose that ##\{u_1, \dots, u_m \}## is a basis for ##U_1 \cap U_3##. By definition of intersection, each ##u_i \in U_3##. A well-known theorem in linear algebra says that a linearly independent list of vectors in a finite-dimensional vector space can always be extended to a basis of that space (another well-known fact: any subspace of a finite-dimensional vector space is itself finite dimensional). Hence, a basis for ##U_3## is

$$\beta_{U_3} = \{u_1, \dots, u_m, w_1, \dots, w_j \}$$

But I do not really see how we can conclude the proof. I guess we still need to argue/prove why ##\{w_1, \dots, w_j \}## is a basis for ##U_2 \cap U_3##.

Your guidance is appreciated, thanks! :biggrin:
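As a quick sanity check before proving anything (my own toy example, not part of the proof), the statement can be tested numerically. This sketch assumes NumPy and uses the standard identity ##\dim(U \cap W) = \dim U + \dim W - \dim(U + W)##, computing each dimension as a matrix rank; the subspaces `U1`, `U2`, `U3` are made-up examples in ##\mathbb{R}^4##:

```python
import numpy as np

def dim_span(*mats):
    # Dimension of the span of all columns of the given matrices.
    return np.linalg.matrix_rank(np.hstack(mats))

def dim_intersection(A, B):
    # dim(U ∩ W) = dim U + dim W - dim(U + W), where U = col(A), W = col(B).
    return dim_span(A) + dim_span(B) - dim_span(A, B)

# Toy subspaces of R^4, each given by spanning columns.
U1 = np.eye(4)[:, [0, 1]]   # span{e1, e2}
U2 = np.eye(4)[:, [2, 3]]   # span{e3, e4}, so U1 ∩ U2 = {0}
U3 = np.eye(4)[:, [0, 2]]   # span{e1, e3}

lhs = dim_intersection(U1, U3) + dim_intersection(U2, U3)
print(lhs, dim_span(U3))    # 2 2 -> the inequality holds in this example
```

Here ##U_1 \cap U_3 = \operatorname{span}\{e_1\}## and ##U_2 \cap U_3 = \operatorname{span}\{e_3\}##, so the left-hand side is ##1+1=2 \leq 2 = \dim U_3##.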
 
  • #2
Too complicated! Start with a basis for ##U_3##.
 
  • #3
I would choose a basis for ##U_1\cap U_3## and a basis for ##U_2\cap U_3## and show that they are linearly independent.

This way you can see where ##U_1\cap U_2 =\{0\}## is needed, and the conclusion then is trivial.

If you start with a basis of ##U_3## you have no control over the subspaces.
 
  • #4
That's true.
 
  • #5
fresh_42 said:
I would choose a basis for ##U_1\cap U_3## and a basis for ##U_2\cap U_3## and show that they are linearly independent.

This way you can see where ##U_1\cap U_2 =\{0\}## is needed, and the conclusion then is trivial.

If you start with a basis of ##U_3## you have no control over the subspaces.

This was really helpful, thanks.

I am a bit stuck, let me share.

Let ##\{u_1, \dots, u_m \}## be a basis for ##U_1 \cap U_3## and ##\{w_1, \dots, w_j \}## a basis for ##U_2 \cap U_3##. We want to prove that ##\{u_1, \dots, u_m, w_1, \dots, w_j \}## is linearly independent.

Suppose that

$$a_1 u_1 + \dots + a_m u_m + b_1 w_1 + \dots + b_j w_j = 0$$

Rearranging, we see that ##b_1 w_1 + \dots + b_j w_j = -a_1 u_1 - \dots - a_m u_m##, so ##b_1 w_1 + \dots + b_j w_j \in U_1\cap U_3##. Besides, ##\{w_1, \dots, w_j \}## is a basis for ##U_2 \cap U_3## and ##U_1\cap U_2 =\{0\}##, so what we conclude from this reasoning is that ##w_1, \dots, w_j \in U_3##.

Analogously, we can rearrange as ##a_1 u_1 + \dots + a_m u_m = -b_1 w_1 - \dots - b_j w_j## to see that ##u_1, \dots, u_m \in U_3##.

But I do not see how this leads us to conclude that ##a_i = 0## and ##b_j = 0##.
 
  • #6
JD_PM said:
This was really helpful, thanks.

I am a bit stuck, let me share.

Let ##\{u_1, \dots, u_m \}## be a basis for ##U_1 \cap U_3## and ##\{w_1, \dots, w_j \}## a basis for ##U_2 \cap U_3##. We want to prove that ##\{u_1, \dots, u_m, w_1, \dots, w_j \}## is linearly independent.

Suppose that

$$a_1 u_1 + \dots + a_m u_m + b_1 w_1 + \dots + b_j w_j = 0$$

Rearranging, we see that ##b_1 w_1 + \dots + b_j w_j = -a_1 u_1 - \dots - a_m u_m##, so ##b_1 w_1 + \dots + b_j w_j \in U_1\cap U_3##. Besides, ##\{w_1, \dots, w_j \}## is a basis for ##U_2 \cap U_3## and ##U_1\cap U_2 =\{0\}##, so what we conclude from this reasoning is that ##w_1, \dots, w_j \in U_3##.

Analogously, we can rearrange as ##a_1 u_1 + \dots + a_m u_m = -b_1 w_1 - \dots - b_j w_j## to see that ##u_1, \dots, u_m \in U_3##.

But I do not see how this leads us to conclude that ##a_i = 0## and ##b_j = 0##.
Forget about ##U_3##. You have shown that there is a vector ##v##
$$
U_1 \supseteq U_1\cap U_3 \ni v:=a_1 u_1 + \dots +a_m u_m = -b_1 w_1 - \dots - b_j w_j \in U_2\cap U_3\subseteq U_2
$$
So ##v\in U_1\cap U_2=\{0\}.## What does this mean for the ##a_i## and ##b_j##? Here we need that the ##u_i## and the ##w_j## are linearly independent.
 
  • #7
fresh_42 said:
Forget about ##U_3##. You have shown that there is a vector ##v##
$$
U_1 \supseteq U_1\cap U_3 \ni v:=a_1 u_1 + \dots +a_m u_m = -b_1 w_1 - \dots - b_j w_j \in U_2\cap U_3\subseteq U_2
$$
So ##v\in U_1\cap U_2=\{0\}.## What does this mean for the ##a_i## and ##b_j##? Here we need that the ##u_i## and the ##w_j## are linearly independent.

Oh, so we have shown that ##v \in U_1\cap U_2## and we are given that ##U_1\cap U_2 = \{0\}##, hence ##v=0##. Since the ##u_i## and the ##w_j## are linearly independent, ##v=0## forces ##a_i = 0## and ##b_j = 0##, and we are done (?).
 
  • #8
JD_PM said:
Oh, so we have shown that ##v \in U_1\cap U_2## and we are given that ##U_1\cap U_2 = \{0\}##, hence ##v=0##. Since the ##u_i## and the ##w_j## are linearly independent, ##v=0## forces ##a_i = 0## and ##b_j = 0##, and we are done (?).
Almost. Now you know that there are ##m+j## linearly independent vectors, the ##u_i## and the ##w_i##. But everything takes place in ##U_3##, and there can be at most ##\dim U_3## of them.
 
  • #9
Alternatively, a proof by contradiction should be fairly simple. If the sum of the dimensions of two subspaces is greater than the dimension of the space, then the subspaces must have a vector in common.
 
  • #10
fresh_42 said:
Almost. Now you know that there are ##m+j## linearly independent vectors, the ##u_i## and the ##w_i##. But everything takes place in ##U_3##, and there can be at most ##\dim U_3## of them.

So from this argument we see that ##\dim (U_1 \cap U_3) + \dim (U_2 \cap U_3) = m+j \leq \dim U_3##.
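For completeness, here is the direct argument written out in one place (a sketch, using the notation from the posts above):

```latex
\textbf{Claim.} If $U_1 \cap U_2 = \{0\}$, then
$\dim(U_1 \cap U_3) + \dim(U_2 \cap U_3) \le \dim U_3$.

\textbf{Proof sketch.} Let $\{u_1,\dots,u_m\}$ be a basis of $U_1\cap U_3$
and $\{w_1,\dots,w_j\}$ a basis of $U_2\cap U_3$. Suppose
$\sum_{i=1}^m a_i u_i + \sum_{k=1}^j b_k w_k = 0$. Then the vector
$v := \sum_{i=1}^m a_i u_i = -\sum_{k=1}^j b_k w_k$ lies in both $U_1$
and $U_2$, so $v \in U_1\cap U_2 = \{0\}$, i.e.\ $v = 0$. Linear
independence of the $u_i$ gives $a_i = 0$ for all $i$, and linear
independence of the $w_k$ gives $b_k = 0$ for all $k$. Hence the $m+j$
vectors $u_1,\dots,u_m,w_1,\dots,w_j$ are linearly independent, and since
they all lie in $U_3$, we get $m + j \le \dim U_3$. \qquad $\blacksquare$
```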

PeroK said:
Alternatively, a proof by contradiction should be fairly simple. If the sum of the dimensions of two subspaces is greater than the dimension of the space, then the subspaces must have a vector in common.

Might you please elaborate further, so that I can see whether it is simpler or not (to me at least)?
 
  • #11
JD_PM said:
Might you please elaborate further, so that I can see whether it is simpler or not (to me at least)?
The outline argument is that if you combine the subspace bases you have too many vectors for linear independence. And linear dependence leads to a common non-zero vector.

More generally, you need to become familiar with the relationship between a direct proof and a proof of the same thing by contradiction. This is back to the issue of basic mathematical techniques.
 
  • #12
JD_PM said:
Might you please elaborate further, so that I can see whether it is simpler or not (to me at least)?
This would be a good example to practice. The arguments are all similar. Start with the contrary statement:
$$
\dim (U_1\cap U_3) + \dim (U_2\cap U_3) > \dim U_3
$$

The subspaces on the left-hand side sit inside ##U_3##. Now if the sum of the dimensions is greater than the dimension of the surrounding space, what does that mean? Don't jump to conclusions, try to take one step after the other.
 
  • #13
Just a quick question regarding one of your previous comments.

fresh_42 said:
Almost. Now you know that there are ##m+j## linearly independent vectors, the ##u_i## and the ##w_i##. But everything takes place in ##U_3##, and there can be at most ##\dim U_3## of them.

By "everything takes place in ##U_3##" did you mean that the linearly independent vectors ##u_i## and ##w_i## are in ##U_3##?

fresh_42 said:
Now if the sum of the dimensions is greater than the dimension of the surrounding space, what does that mean? Don't jump to conclusions, try to take one step after the other.

Alright, let's go step by step.

This means to me that the sum is no longer ##\subseteq U_3##
 
  • #14
No. It means that something had to be counted at least twice.

We have ##(U_1\cap U_3) + (U_2\cap U_3) \subseteq U_3##, which is why I said "everything takes place in ##U_3##".

If the sum of dimensions on the left is bigger than the maximal number of linearly independent vectors available in ##U_3##, then ##\{u_i\}\cup \{w_k\}## cannot be linearly independent. Hence we have a linear combination such that ...

(I'm not sure whether this is what @PeroK had in mind, but I try to run through it step by step.)
 
  • #15
fresh_42 said:
If the sum of dimensions on the left is bigger than the maximal number of linearly independent vectors available in ##U_3##, then ##\{u_i\}\cup \{w_k\}## cannot be linearly independent.

Hence we have a linear combination such that some ##x \in \{u_i\}\cup \{w_k\}## can be written as a linear combination of the previous elements of ##\{u_i\}\cup \{w_k\}## (linear dependence lemma).

However, I do not see what you are driving at...
 
  • #16
JD_PM said:
However, I do not see what you are driving at...
I don't know either. I simply took the contrary assumption, and now I'm looking at where we end up.

We have a non-trivial linear combination equal to zero, i.e. ##0=\sum_{i=1}^m a_iu_i +\sum_{k=1}^j b_kw_k##. Now at least one of the coefficients ##a_i## or ##b_k## is nonzero, say ##a_1\neq 0##, which we may assume since otherwise we simply renumber the vectors. Thus
$$
0\neq u_1+\sum_{i=2}^m a_1^{-1}a_iu_i = - \sum_{k=1}^j b_ka_1^{-1}w_k \in U_1\cap U_2 =\{0\}
$$

As I said, these are the same arguments written differently.
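The pigeonhole step above (more vectors than ##\dim U_3## forces a dependence) can also be seen numerically; this is just an illustration with made-up vectors, assuming NumPy:

```python
import numpy as np

# Three vectors living in a 2-dimensional space: more vectors than the
# dimension, so they cannot all be linearly independent.
vectors = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]]).T          # columns are the vectors
rank = np.linalg.matrix_rank(vectors)
print(rank, vectors.shape[1])               # rank 2 < 3 vectors

# A concrete non-trivial combination equal to zero: v1 + v2 - v3 = 0.
combo = vectors @ np.array([1.0, 1.0, -1.0])
print(combo)                                # [0. 0.]
```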
 

FAQ: Dimension statement about (finite-dimensional) subspaces

What is a dimension statement about finite-dimensional subspaces?

A dimension statement about finite-dimensional subspaces is a claim relating the dimensions of subspaces, where the dimension of a subspace is the number of linearly independent vectors needed to span it. It is a measure of the size, or "dimensionality", of the subspace.

How do you determine the dimension of a finite-dimensional subspace?

The dimension of a finite-dimensional subspace is the number of vectors in any basis for it. In practice, one can collect a set of spanning vectors as the columns of a matrix and compute its rank: the rank equals the dimension of the column space, which is the subspace they span.
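For instance (a small illustration, assuming NumPy):

```python
import numpy as np

# The subspace of R^3 spanned by these three column vectors has
# dimension 2, because the third column is the sum of the first two.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])   # columns are the spanning vectors
print(np.linalg.matrix_rank(V))  # 2
```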

What does it mean for a subspace to be finite-dimensional?

A finite-dimensional subspace is one that can be spanned by a finite number of linearly independent vectors. This means that the subspace has a finite number of dimensions and can be fully described using a finite set of basis vectors.

Can a subspace have more than one dimension statement?

No. The dimension of a subspace is a single well-defined number: every basis of the subspace contains the same number of vectors, so the dimension does not depend on the choice of basis.

How does the dimension of a subspace relate to the dimension of its parent space?

The dimension of a subspace is always less than or equal to the dimension of its parent space. This is because a subspace is a subset of its parent space and cannot have more dimensions than the space it is contained in.
