Dimensionality of the sum of subspaces

In summary, we are given two subspaces, ##\mathbb{V}_1^{n_1}## and ##\mathbb{V}_2^{n_2}##, with the property that every element of ##\mathbb{V}_1^{n_1}## is orthogonal to every element of ##\mathbb{V}_2^{n_2}##. We need to show that the dimensionality of the sum of these two subspaces, ##\mathbb{V}_1^{n_1} + \mathbb{V}_2^{n_2}##, is equal to ##n_1 + n_2##. To prove this, we can express any element of the sum in terms of bases of the two subspaces and show that the combined set of ##n_1 + n_2## basis vectors is linearly independent.
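A brief sketch of the key step worked out in the thread below, assuming (as the posters do) orthonormal bases ##\{u_i\}## of ##\mathbb{V}_1^{n_1}## and ##\{v_j\}## of ##\mathbb{V}_2^{n_2}##: if ##\sum_{i=1}^{n_1} c_i u_i + \sum_{j=1}^{n_2} d_j v_j = 0##, taking the inner product with ##u_k## and using ##\langle u_k \,|\, v_j \rangle = 0## (orthogonality of the subspaces) together with ##\langle u_k \,|\, u_i \rangle = \delta_{ki}## gives ##c_k = 0## for every ##k##; the same argument with ##v_k## gives ##d_j = 0## for every ##j##. The combined set of ##n_1 + n_2## vectors is therefore linearly independent and spans the sum.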
  • #36
fresh_42 said:
Next you used, without mention, ##\langle u_1\,|\,u_i\rangle = \delta_{1i}##. Why are you allowed to do so? Nowhere up to now, i.e. in this post, has anything been written about ##u_i \perp u_j \quad (i \neq j)##, and the problem statement doesn't say it either! So up to this point there is no argument for why ##u_1 \perp \operatorname{span}\{u_2, \dots, u_p\}##.

Where did I use the above? Please mention this, too.

Thanks for the points.
 
  • #37
Mark44 said:
@Pushoam, back in post #16 you wrote this:
Maybe you get the distinction between a linearly independent set of vectors and a linearly dependent set, but it wasn't clear in your equation 5.
Basis vectors are linearly independent, and the definition of linearly independent vectors is given as:
##\sum_{i=1}^n c_i |u_i \rangle = 0## means all the coefficients ##c_i = 0##.
This is what I wrote in eqn. (5).
Then why do you say that the definition of linearly independent vectors is not clear in eqn. (5)?
 
  • #38
Pushoam said:
Basis vectors are linearly independent, and the definition of linearly independent vectors is given as:
##\sum_{i=1}^n c_i |u_i \rangle = 0## means all the coefficients ##c_i = 0##.
This is what I wrote in eqn. (5).
No, this is what you wrote, which I quoted exactly as you wrote it:
Pushoam said:
Considering the following linear combination of basis vectors of ##V_1##,
##|c_1 u_1 + c_2 u_2 + \dots + c_p u_p \rangle = 0 \Rightarrow \{c_i\} = 0,\ i = 1, 2, \dots, p## ...(5)

My example set A matches exactly in substance what you have here, yet my example is a linearly dependent set. For any set of vectors, the equation ##c_1 v_1 + \dots + c_n v_n = 0## always has ##c_1 = c_2 = \dots = c_n = 0## as a solution, whether the vectors are linearly dependent or linearly independent. The difference is that for linearly independent vectors, the trivial solution is the only solution.
Pushoam said:
Then why do you say that definition of linearly independent vectors is not clear in eqn. (5)?
See above.
 
  • #39
Aren't "##\sum_{i=1}^n c_i |u_i \rangle = 0## means all the coefficients ##c_i = 0##" and "##|c_1 u_1 + c_2 u_2 + \dots + c_p u_p \rangle = 0 \Rightarrow \{c_i\} = 0,\ i = 1, 2, \dots, p##" two expressions for the same thing?
My understanding was that both expressions define linear independence of vectors.
 
  • #40
Pushoam said:
Where did I use the above? Please mention this, too.
Pushoam said:
Taking the dot product with ##\langle u_1|## gives ##c_1 = -\langle u_1 \,|\, c_2 u_2 + \dots + c_p u_p \rangle = 0## ...(6)
This is not correct without assuming ##u_1 \perp u_i \quad (i>1)## and ##\langle u_1\,|\,u_1 \rangle = 1##. And nowhere was there an argument for why it should be correct. As I elaborated, it is even plainly wrong if you apply Gram-Schmidt after the renumbering that leads to the choice of ##u_1##, because Gram-Schmidt might also involve a renumbering, and you have to rule out that the two renumberings conflict with each other. Gram-Schmidt makes a choice of basis; after that, any further choice must be shown to be allowed. Conversely, it is not obvious that Gram-Schmidt can be applied to a basis that has already been specified (via its numbering).
Pushoam said:
Aren't "##\sum_{i=1}^n c_i |u_i \rangle = 0## means all the coefficients ##c_i = 0##"
Yes, but not for arbitrary vectors, only for linearly independent ones. And you assumed ##u_1## to be linearly dependent at one point.
... and "##|c_1 u_1 + c_2 u_2 + \dots + c_p u_p \rangle = 0 \Rightarrow \{c_i\} = 0,\ i = 1, 2, \dots, p##" two expressions for the same thing?
Yes.
My understanding was that both expressions define linear independence of vectors.
Yes, that is the definition. But since you assumed ##u_1 \in \operatorname{span}\{\,u_i, v_j\,\}##, things are not automatically obvious. Your mistake is that you used the same coefficients ##c_i## both to define linear independence (which, by the way, there is no need to do) and in the expression ##u_1 = \sum c_i u_i + \sum d_j v_j##; it is always a bad idea to give two potentially different things the same name. I wouldn't let you get away with it.
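To make the orthonormality assumption concrete, here is a minimal numerical sketch of classical Gram-Schmidt (Python with numpy; the function name and example vectors are illustrative, not from the thread). It orthonormalizes the inputs in the given order, which is exactly where the renumbering subtlety above enters:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors, in the given order."""
    basis = []
    for v in vectors:
        # Subtract the projections onto the vectors already orthonormalized.
        w = v - sum((u @ v) * u for u in basis)
        # Assumes independence: norm(w) == 0 would mean a dependent input.
        basis.append(w / np.linalg.norm(w))
    return basis

u1, u2 = np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])
e1, e2 = gram_schmidt([u1, u2])
print(np.allclose(e1 @ e2, 0), np.allclose(e1 @ e1, 1))  # True True
```

Reordering the input list generally yields a different orthonormal basis, so an ordering chosen before running Gram-Schmidt must not be silently changed afterwards.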
 
  • #41
Pushoam said:
... and "##|c_1 u_1 + c_2 u_2 + \dots + c_p u_p \rangle = 0 \Rightarrow \{c_i\} = 0,\ i = 1, 2, \dots, p##" two expressions for the same thing?
fresh_42 said:
Yes.
I'm going to disagree here, if only because so many students don't grasp the difference between linear independence and linear dependence. The equation above is technically correct, but students commonly don't realize that the equation ##|c_1 u_1 + c_2 u_2 + \dots + c_p u_p \rangle = 0## always has at least one solution, whether or not the vectors are linearly independent.
For example, using my earlier example (set A), if we start with the equation ##a_1 \langle 1, 0, 0 \rangle + a_2 \langle 0, 1, 0 \rangle + a_3 \langle 1, 1, 0 \rangle = 0##, by simple substitution we see that there is a solution ##a_1 = a_2 = a_3 = 0##. Can you conclude that the three vectors on the left side of this equation are linearly independent? I hope not.

This is why most linear algebra textbooks are careful to define linear independence by saying that there is only the trivial solution and no others.
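A quick numerical check of this point, as a minimal sketch in Python with numpy (the matrix layout and variable names are illustrative): for set A the trivial solution exists, yet so does a nontrivial one, so the set is linearly dependent.

```python
import numpy as np

# Columns are the vectors of set A: <1,0,0>, <0,1,0>, <1,1,0>.
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 0, 0]])

# Rank 2 with 3 vectors means A @ c = 0 has nontrivial solutions.
print(np.linalg.matrix_rank(A))  # 2

# The trivial solution always works ...
print(A @ np.array([0, 0, 0]))   # [0 0 0]
# ... but so does a nontrivial one, since v1 + v2 - v3 = 0.
print(A @ np.array([1, 1, -1]))  # [0 0 0]
```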
 
  • #42
Right. The essential part is the ##\Longrightarrow##, and it often gets lost. Practically the entire definition is contained in that arrow.
 
  • #43
Mark44 said:
It's not clear to me exactly what PeroK's objection was, but it might be this idea:

Here's an example to illustrate the logic problem:

Let ##\{ u_i \}## be a basis. Therefore:

##\sum c_i u_i = 0 \ \Rightarrow c_i = 0##

And, in particular, ##c_1 = 0## (Equation (1))

Let ##u## be any vector, with ##u = \sum c_i u_i##

If ##c_1 \ne 0##, then we have a contradiction with Equation (1).

This sort of thing, in my experience, is a common mistake.

I think @Pushoam understands this now?
 
  • #44
PeroK said:
Here's an example to illustrate the logic problem:

Let ##\{ u_i \}## be a basis. Therefore:

##\sum c_i u_i = 0 \ \Rightarrow c_i = 0##

And, in particular, ##c_1 = 0## (Equation (1))

Let ##u## be any vector, with ##u = \sum c_i u_i##

If ##c_1 \ne 0##, then we have a contradiction with Equation (1).

This sort of thing, in my experience, is a common mistake.

I think @Pushoam understands this now?

Thanks for pointing it out. I shall be careful with notation, as notational mistakes can become hard to spot later.
 
  • #45
Pushoam said:
Thanks for pointing it out. I shall be careful with notation, as notational mistakes can become hard to spot later.
Thou shalt not use the same letter for two different things!
 
  • #46
Thanks, @fresh_42.
 