Confusion about the Direct Sum of Subspaces

In summary, Sheldon Axler's Linear Algebra Done Right, 3rd edition, defines the "internal direct sum", or simply "direct sum", as a sum of subspaces in which each element can be written uniquely as a sum of elements, one from each subspace. The condition for a sum of subspaces to be an internal direct sum is that the only way to write the zero vector as such a sum is to take the element from each subspace to be zero. The subspaces cannot have elements in common other than the zero vector and still form an internal direct sum. This can be proven by showing that if any nonzero vector can be written as two different sums, then the zero vector can also be written as two different sums, so the sum is not direct.
  • #1
Calculuser
In "Sheldon Axler's Linear Algebra Done Right, 3rd edition", on page 21 "internal direct sum", or direct sum as the author uses, is defined as such:

[Attachment: Axler's definition 1.40 of a direct sum]


Following that, there is a statement titled "Condition for a direct sum" on page 23 that specifies the condition for a sum of subspaces to be an internal direct sum. As far as I can tell, in the proof the author only proves the uniqueness part of this condition, which is understandable, but I do not think that proves the statement itself:

[Attachment: Axler's "Condition for a direct sum" and its proof]


Question 1: How can we prove that the following check is sufficient for a sum of subspaces to be an internal direct sum: the only way to write the ##0## vector as a sum ##u_1+u_2+...+u_m## with each ##u_i\in U_i## is to take every ##u_i## equal to ##0##?

Question 2: Is it possible for some of the subspaces ##U_1, U_2, ..., U_m## to have elements other than the ##0## vector in common while their sum still forms an internal direct sum? If not, why?
 
  • #2
Calculuser said:
In "Sheldon Axler's Linear Algebra Done Right, 3rd edition", on page 21 "internal direct sum", or direct sum as the author uses, is defined as such:

View attachment 244008

Following that there is a statement, titled "Condition for a direct sum" on page 23, that specifies the condition for a sum of subspaces to be internal direct sum. In proof the author proves the uniqueness of this condition as far as I get, which is understandable, but I do not think that proves the statement itself:

View attachment 244010

Question 1: How can we prove that the following check is sufficient for a sum of subspaces to be an internal direct sum: the only way to write the ##0## vector as a sum ##u_1+u_2+...+u_m## with each ##u_i\in U_i## is to take every ##u_i## equal to ##0##?
Indirectly: assume a vector ##u\neq 0## has two expressions and show that in that case ##0## has two expressions, too.
The other direction is trivial: if every vector can be expressed uniquely, then so can the zero vector in particular.
Question 2: Is it possible for some of the subspaces ##U_1, U_2, ..., U_m## to have elements other than the ##0## vector in common while their sum still forms an internal direct sum? If not, why?
It's not possible. Why? See your proof of the answer to question 1.
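To make the answer to Question 2 concrete, here is a minimal numeric sketch (the subspaces are my own choice, not from the book or the thread): if two of the subspaces share a nonzero vector, that vector immediately produces a second way to write ##0##, so the sum cannot be direct.

```python
import numpy as np

# Hypothetical example in R^3: U1 = span{e1, e2} and U2 = span{e2, e3}.
# They share the nonzero vector e2, so besides the trivial sum 0 = 0 + 0
# we also have 0 = e2 + (-e2) with e2 in U1 and -e2 in U2.
e1, e2, e3 = np.eye(3)

u1 = e2         # nonzero element of U1
u2 = -e2        # nonzero element of U2
print(u1 + u2)  # [0. 0. 0.] -- a second, nontrivial representation of the zero vector
```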
 
  • #3
fresh_42 said:
Indirectly: assume a vector ##u\neq 0## has two expressions and show that in that case ##0## has two expressions, too.
The other direction is trivial: if every vector can be expressed uniquely, then so can the zero vector in particular.

Inferring one statement from other statements by some inference rule is what happens in proofs, and I will try to detail this as much as possible using my basic knowledge of propositional calculus.

If every element of ##U_1+U_2+...+U_m## can be uniquely written as a sum of ##u_1+u_2+...+u_m## where each ##u_i## is in ##U_i##, then ##U_1+U_2+...+U_m## is a direct sum. So, $$P\Longrightarrow Q \qquad [1]$$ where, $$P: \text{every element of } U_1+U_2+...+U_m \text{ can be uniquely written as a sum of } u_1+u_2+...+u_m \text{ where each } u_i \text{ is in } U_i$$ $$Q: U_1+U_2+...+U_m \text{ is a direct sum}$$
By modus ponens we would like to get ##Q##, $$P\Longrightarrow Q, P \vdash Q \qquad [2]$$
##P\Longrightarrow Q## is true by definition, so all we have to do is establish ##P##; it is the sufficient condition we need to check. Hence, after verifying that ##P## is true, we can infer ##Q##.

If we take the negation of statement ##P##, $$\lnot P: \text{ there exist elements of } U_1+U_2+...+U_m \text{ that cannot be written uniquely as a sum of } u_1+u_2+...+u_m \text{ where each } u_i \text{ is in } U_i$$
Hence if we take ##v## from ##U_1+U_2+...+U_m##, which can be written in two different sums, $$v=u_1+u_2+...+u_m, \text{ } u_i\in U_i \qquad [3]$$ $$v=w_1+w_2+...+w_m, \text{ } w_i\in U_i \qquad [4]$$
Subtracting ##[4]## from ##[3]##, $$0=(v_1-w_1)+(v_2-w_2)+...+(v_m-w_m), \text{ } u_i, w_i \in U_i \qquad [5]$$
We can interpret ##[3]##, ##[4]## and ##[5]## as follows: if we take an arbitrary vector ##v\neq 0## that can be written as two different sums, then from ##[3]## and ##[4]## we obtain in ##[5]## a way to write the ##0## vector that differs from the trivial sum ##0=0+0+...+0##. (The trivial sum is always available, because ##0## belongs to every ##U_i## by the definition of a subspace, and summing those zeros gives ##0## in ##U_1+U_2+...+U_m##.) In other words, from any nonzero vector with two different representations we can always generate another representation of the zero vector. Therefore we can phrase it as: if some nonzero vector can be written as different sums, then the zero vector can be written as different sums. Contrapositively, if the zero vector cannot be written as different sums, then no nonzero vector can be written as different sums. Representing this statement as, $$R_1\Longrightarrow R_2 \qquad [6]$$
where, $$R_1: \text{ some nonzero vector can be written as different sums}$$ $$R_2: \text{ the zero vector can be written as different sums}$$
We can think of ##R_1\Longrightarrow R_2## as a categorical proposition, where ##R_1## is the set of nonzero vectors ##v## that can be written as at least two different sums, and ##R_2## is the set containing the zero vector whenever it can be written as at least two different sums. In a Venn diagram representation, $$R_1 \setminus R_2 = \emptyset$$
and the complement of the union of these two sets, ##(R_1 \cup R_2)'##, represents the set of all vectors ##v##, including ##0##, that have a unique representation as a sum ##u_1+u_2+...+u_m##. Therefore, if there exists an element in ##R_2##, the sum is not a direct sum; if there is none, it is a direct sum.
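A quick numeric illustration of steps ##[3]##–##[5]## (a minimal sketch; the particular subspaces are assumptions of mine, not taken from the book): with three lines in ##\mathbb{R}^2## the sum cannot be direct, and any vector with two representations yields a nontrivial representation of ##0##.

```python
import numpy as np

# Hypothetical example in R^2: U1 = x-axis, U2 = the line y = x, U3 = y-axis.
# The vector v = (2, 1) has two different representations, as in [3] and [4]:
v = np.array([2.0, 1.0])
u = [np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 0.0])]  # [3]
w = [np.array([2.0, 0.0]), np.array([0.0, 0.0]), np.array([0.0, 1.0])]  # [4]
assert np.allclose(sum(u), v) and np.allclose(sum(w), v)

# Subtracting termwise as in [5]: each difference u_i - w_i stays in U_i,
# and the differences are not all zero, yet they add up to the zero vector.
diffs = [ui - wi for ui, wi in zip(u, w)]
print(diffs)       # [(-1, 0), (1, 1), (0, -1)]
print(sum(diffs))  # [0. 0.] -- a nontrivial way to write the zero vector
```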

Does all this make sense? Did I miss any point, skip any logical step, or anything like that?
 
  • #4
I don't understand what you are doing here. ##P \Longleftrightarrow Q## is definition 1.40 of a direct sum, as far as I can see from your pictures, so there is no work to do there. What's left is to show that the zero vector alone already does the job.

Now, as every vector can be uniquely written as a sum of ##u_j##, this is also true for ##u=0##.

This leaves us with the task: If ##0_U=\sum_j 0_{U_j}## is the only way to write ##0_U \in U##, then there is only one way to write any other vector ##v \in U##, which you did ... somewhere hidden in the waterfall of letters.

You are correct to assume two sums for ##v## as in [3] and [4], and also to build the difference [5] (up to the typo that it should read ##u_j-w_j##). Now you only need two arguments:
  1. The ##U_j## are subspaces, so ##u_j-w_j \in U_j##.
  2. The zero vector has only one representation, namely ##0_U=\sum_j 0_{U_j}##.
Therefore ##u_j-w_j = 0_{U_j}## for all ##j##. End of proof.

You do not need all these things you have written, just ##[3],[4],[5]## and the two arguments above. I assume this is somewhere in what you wrote, but I have difficulty finding it.
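The following is a minimal sketch of the converse direction in a concrete case (the subspaces and vectors are my own hypothetical choices, not from the book): when the zero vector has only the trivial representation, any vector of the sum has exactly one decomposition, which can be computed directly.

```python
import numpy as np

# Hypothetical example in R^3 of a direct sum: U1 = span{b1}, U2 = span{b2},
# U3 = span{b3} with b1, b2, b3 linearly independent, so the only way to
# write 0 as a sum of elements of U1, U2, U3 is the trivial one.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([0.0, 0.0, 1.0])
B = np.column_stack([b1, b2, b3])

v = np.array([3.0, 2.0, 5.0])
coeffs = np.linalg.solve(B, v)                    # unique, since B is invertible
u1, u2, u3 = coeffs[0]*b1, coeffs[1]*b2, coeffs[2]*b3
print(u1, u2, u3)                                 # the unique decomposition of v
assert np.allclose(u1 + u2 + u3, v)
```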
 

FAQ: Confusion about the Direct Sum of Subspaces

What is the direct sum of subspaces?

The direct sum of subspaces is a sum ##U_1+U_2+...+U_m## in which every element can be written in exactly one way as ##u_1+u_2+...+u_m## with each ##u_i\in U_i##. It is usually denoted by the symbol ##\oplus## between the subspaces, as in ##U_1\oplus U_2\oplus ...\oplus U_m##, and it consists of all possible sums of vectors taken one from each subspace.

How is the direct sum different from the sum of subspaces?

As sets, the two are the same: the sum ##U_1+U_2+...+U_m## consists of all vectors that can be formed by adding together one vector from each subspace. The sum is called a direct sum when, in addition, every element of the sum can be written in only one way as such a combination. In other words, a direct sum is not a larger subspace; it is a sum of subspaces with an extra uniqueness property.

Can the direct sum of subspaces be infinite?

Yes, the direct sum of subspaces can be infinite-dimensional. This happens when the subspaces involved are themselves infinite-dimensional, as in the case of infinite-dimensional vector spaces. In these cases, the direct sum is also an infinite-dimensional vector space.

How is the direct sum related to linear independence?

The direct sum of subspaces is closely related to linear independence. A sum ##U_1+U_2+...+U_m## is direct exactly when the only way to write ##0## as ##u_1+u_2+...+u_m## with each ##u_i\in U_i## is to take every ##u_i=0##, which mirrors the definition of linear independence for a list of vectors. In the finite-dimensional case, combining bases of the ##U_i## gives a basis of the direct sum, so the dimensions add up.
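One practical consequence of this connection, in the finite-dimensional case, is a dimension test: a sum of subspaces is direct exactly when ##\dim(U_1+...+U_m)=\dim U_1+...+\dim U_m##. Here is a minimal numeric sketch of that check (the helper name and the example subspaces are my own, not from the book):

```python
import numpy as np

def is_direct_sum(*subspace_bases):
    """Numeric check: a sum U_1 + ... + U_m of finite-dimensional subspaces
    is direct iff dim(U_1) + ... + dim(U_m) equals dim(U_1 + ... + U_m).
    Each argument is a matrix whose columns span one subspace."""
    dims = [np.linalg.matrix_rank(B) for B in subspace_bases]
    stacked = np.hstack(subspace_bases)
    return sum(dims) == np.linalg.matrix_rank(stacked)

# x-axis + y-axis in R^2: a direct sum
print(is_direct_sum(np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])))  # True
# x-axis + the same x-axis: not direct (they share nonzero vectors)
print(is_direct_sum(np.array([[1.0], [0.0]]), np.array([[1.0], [0.0]])))  # False
```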

Can the direct sum of subspaces be used in practical applications?

Yes, the direct sum of subspaces has many practical applications in fields such as linear algebra, physics, and engineering. It is used to model complex systems, solve systems of linear equations, and analyze data. It is also an important concept in understanding vector spaces and their properties.
