How Does the Direct Sum Relate to Unique Decomposition in Vector Spaces?

In summary, a direct sum of vector spaces is a vector space formed by combining two or more vector spaces while preserving their individual structures. For finitely many spaces it coincides with the direct product; the two constructions differ for infinite families. A basis of a direct sum is obtained by combining bases of the summands, and a direct sum can be infinite-dimensional. Applications include studying spaces that carry multiple structures, decomposing state spaces in physics into independent sectors, and uses in computer science.
  • #1
Kevin_H
During lecture, the professor gave us a theorem he wants us to prove on our own before he goes over the theorem in lecture.

Theorem: Let ##V_1, V_2, ... V_n## be subspaces of a vector space ##V##. Then the following statements are equivalent.
  1. ##W=\sum V_i## is a direct sum.
  2. Decomposition of the zero vector is unique.
  3. ##V_i\cap\sum_{j\neq i}V_j =\{0\}## for ##i = 1, 2, ..., n##
  4. dim##W## = ##\sum##dim##V_i##
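For concreteness, here is how statements 3 and 4 fail together on a small example of my own choosing (a sketch in NumPy, not part of the theorem):

```python
import numpy as np

# A concrete non-direct example in R^3: V1 = span{e1, e2}, V2 = span{e2, e3}.
# Columns of each matrix are basis vectors of the subspace.
B1 = np.array([[1., 0.], [0., 1.], [0., 0.]])
B2 = np.array([[0., 0.], [1., 0.], [0., 1.]])

dim_V1 = np.linalg.matrix_rank(B1)
dim_V2 = np.linalg.matrix_rank(B2)
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))   # dim(V1 + V2)

# Statement 4 fails: dim(V1 + V2) = 3 < 4 = dim V1 + dim V2.
# By the dimension formula the deficit is exactly dim(V1 ∩ V2),
# so statement 3 fails too: V1 ∩ V2 = span{e2} ≠ {0}.
dim_cap = dim_V1 + dim_V2 - dim_sum
print(dim_sum, dim_cap)  # 3 1
```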
What I understand:
  • Definition of Basis
  • Dimensional Formula
  • Definition of Direct Sum

My Attempt: ## 1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1##

##1 \rightarrow 2##
1 states ##W=\sum V_i## is a direct sum. Then by definition every ##\alpha \in W## has a unique decomposition ##\alpha = \alpha_1 + \alpha_2 + ... + \alpha_n## where ##\alpha_i \in V_i## for ##i = 1, 2, ..., n.## Taking ##\alpha = 0##, the trivial decomposition ##0 = 0 + 0 + ... + 0## must be the only one, so ##\alpha_i = 0## for all ##i##.

##2 \rightarrow 3##
2 states there is a unique decomposition ##0 = \alpha_1 + ... + \alpha_n## where ##\alpha_i \in V_i## for ##i = 1, 2, ..., n##, namely ##\alpha_i = 0## for all ##i##. Suppose there exists a nonzero ##x_i \in V_i \cap \sum_{j\neq i}V_j##. Then ##x_i = \sum_{j\neq i} x_j## for some ##x_j \in V_j##, hence ##x_i + \sum_{j\neq i} (-x_j) = 0## with ##-x_j \in V_j##. Since ##x_i \neq 0##, this is a decomposition of the zero vector whose components are not all zero, contradicting uniqueness. Therefore ##V_i \cap \sum_{j\neq i}V_j = \{0\}##.

##3 \rightarrow 4##
3 states ##V_i\cap\sum_{j\neq i}V_j =\{0\}## for ##i = 1, 2, ..., n##, which implies dim(##V_i\cap\sum_{j\neq i}V_j##) = ##0##; since ##\sum_{j>i}V_j \subseteq \sum_{j\neq i}V_j##, the intersections ##V_i\cap\sum_{j>i}V_j## are also ##\{0\}##. Now apply the dimension formula, which states dim(##X+Y##) = dim(##X##) + dim(##Y##) - dim(##X\cap Y##):

\begin{eqnarray*}
\text{dim}(V_1+(V_2 + ... + V_n)) & = & \text{dim}(V_1) + \text{dim}(V_2 + (V_3 + ... + V_n)) - \text{dim}(V_1 \cap \sum_{2}^nV_j)\\
& = & \text{dim}(V_1) + \text{dim}(V_2) + \text{dim}(V_3 + (V_4 +... + V_n)) - \text{dim}(V_2 \cap \sum_{3}^nV_j)\\
\end{eqnarray*}
Repeatedly applying the dimension formula to dim(##V_i + V_{i + 1} + ... + V_{n}##), with each subtracted intersection term vanishing by 3, yields
\begin{eqnarray*}
\text{dim}(V_1+(V_2 + ... + V_n)) & = & \text{dim}(V_1) + \text{dim}(V_2) + ... + \text{dim}(V_n)\\
& = & \sum_{i = 1}^n\text{dim}(V_i)\\
\end{eqnarray*}
Since ##W = \sum_{i = 1}^n V_i##, this gives dim##W## = ##\sum_{i=1}^n##dim##V_i##.
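As a sanity check on this step, here is a small numerical sketch (my own choice of random subspaces; a generic choice of bases in ##\mathbb{R}^6## makes the sum direct):

```python
import numpy as np

# Random subspaces of R^6 with dims 1, 2, 3; a generic choice makes the
# sum direct, so dim W should equal the sum of the dimensions.
rng = np.random.default_rng(0)
B1 = rng.standard_normal((6, 1))
B2 = rng.standard_normal((6, 2))
B3 = rng.standard_normal((6, 3))

dims = [np.linalg.matrix_rank(B) for B in (B1, B2, B3)]
dim_W = np.linalg.matrix_rank(np.hstack([B1, B2, B3]))
print(sum(dims), dim_W)  # 6 6
```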

##4 \rightarrow 1##
4 states dim##W## = ##\sum##dim##V_i##. By definition, ##W = \sum_{i=1}^nV_i = \{\alpha = \alpha_1 + \alpha_2 + ... + \alpha_n \in V: \alpha_i \in V_i \text{ for } i = 1,..., n\}##. We seek to show that every ##\alpha \in W## has a unique decomposition. By hypothesis, dim(##W) = m ## and dim(##V_i) = m_i## where ##m = \sum_{i = 1}^nm_i##. Now, each ##V_i## has a basis ##\Lambda_i## consisting of ##m_i## linearly independent vectors. Since ##\alpha_i \in V_i##, there exists a unique linear combination ##\alpha_i = \sum_{k=1}^{m_i}c_{i,k}\beta_{i,k}##, where ##c_{i,k}## is a scalar in the field and ##\beta_{i,k} \in \Lambda_i##. Thus ##\alpha \in W## can be written as
\begin{eqnarray*}
\alpha & = & \alpha_1 + \alpha_2 + ... + \alpha_n\\
& = & (\sum_{k=1}^{m_1}c_{1,k}\beta_{1,k}) + (\sum_{k=1}^{m_2}c_{2,k}\beta_{2,k}) + ... + (\sum_{k=1}^{m_n}c_{n,k}\beta_{n,k})
\end{eqnarray*}
It follows by hypothesis that ##\alpha## is composed of ##m = m_1 + ... + m_n## linearly independent vectors. Thus the decomposition ##\alpha = \alpha_1 + \alpha_2 + ... + \alpha_n## with ##\alpha_i \in V_i## for ##i = 1, 2, ..., n## is unique; therefore, ##W = \sum_{i = 1}^nV_i## is a direct sum.

Since ##1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1##, all four statements are equivalent.

_________________________

Now I feel like my proof overall, especially ##4 \rightarrow 1##, could be improved upon. Do you have any suggestions on how I can make the proof better? Are there any logical errors? Is there an alternative way to prove this? I appreciate any feedback or criticism. Thank you for your time, and have a wonderful day.
 
  • #2
It looks pretty good.
However ##4\to 1## is missing a piece. When you write
Kevin_H said:
It follows by hypothesis that ##\alpha## is composed of ##m = m_1 + ... + m_n## linearly independent vectors.
that statement is not supported by the assumption of (4), which is simply a statement about dimensions and says nothing (directly) about the relationships between the subspaces ##V_i##. We know by supposition that the vectors in each set ##\Lambda_i\equiv\{\beta_{i,1},...,\beta_{i,m_i}\}## are mutually independent, but not that the vectors in ##\Lambda_i## are independent of those in ##\Lambda_j## for ##i\neq j##.

I wonder whether the contrapositive might be an easier way to prove this. That is, prove that ##\neg 1\to\neg 4##. If you assume the sum is not direct it should be easy enough to identify a nonzero vector in the intersection of two subspaces which, by the dimensional formula, will entail that the dimension of the sum of subspaces is less than the sum of dimensions.
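For instance, here is a quick numerical sketch of that idea (a toy example of my own, not a proof):

```python
import numpy as np

# Toy illustration of ¬1 → ¬4: plant a common nonzero vector v in both
# subspaces of R^4, so the sum cannot be direct.
v  = np.array([[1.], [1.], [0.], [0.]])
e3 = np.array([[0.], [0.], [1.], [0.]])
e4 = np.array([[0.], [0.], [0.], [1.]])
B1 = np.hstack([v, e3])   # V1 = span{v, e3}
B2 = np.hstack([v, e4])   # V2 = span{v, e4}

dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))
print(dim_sum)  # 3, strictly less than dim V1 + dim V2 = 4
```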
 

FAQ: How Does the Direct Sum Relate to Unique Decomposition in Vector Spaces?

What is a direct sum of vector spaces?

A direct sum of vector spaces is a vector space built from two or more vector spaces in a way that preserves their individual structures. Concretely, a space W is the (internal) direct sum of subspaces V1, ..., Vn when every element of W can be written in exactly one way as a sum v1 + v2 + ... + vn with each vi in Vi; addition and scalar multiplication then act componentwise on these decompositions.

How is a direct sum different from a direct product?

A direct sum and a direct product are both ways of combining vector spaces. For finitely many spaces the two constructions give isomorphic vector spaces, with elements represented as tuples and the operations defined component-wise. They differ for infinite families: the direct sum contains only those tuples in which all but finitely many components are zero, while the direct product contains all tuples. There is also the internal direct sum, in which the summands are subspaces of a common space and the elements are genuine sums of vectors from those subspaces.

How do you find the basis of a direct sum of vector spaces?

To find a basis of a direct sum of vector spaces, combine the bases of the individual spaces. If the individual spaces have bases {v1, v2, ... , vn} and {w1, w2, ... , wm}, then {v1, v2, ... , vn, w1, w2, ... , wm} is a basis of the direct sum: it is linearly independent and spans the whole space. (For subspaces of a common space, this works exactly when the sum is direct; otherwise the combined list is linearly dependent.)
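As a minimal numerical sketch of this (assuming the external direct sum of R^2 and R^3 realized as complementary coordinate blocks of R^5; the names are illustrative):

```python
import numpy as np

# Realize R^2 ⊕ R^3 as complementary coordinate blocks of R^5:
# columns are the images of the two bases under the embeddings.
basis_V = np.eye(5)[:, :2]   # images of v1, v2
basis_W = np.eye(5)[:, 2:]   # images of w1, w2, w3
combined = np.hstack([basis_V, basis_W])

rank = np.linalg.matrix_rank(combined)
print(rank)  # 5: the combined list is independent and spans the direct sum
```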

Can a direct sum of vector spaces be infinite-dimensional?

Yes, a direct sum of vector spaces can be infinite-dimensional. This happens when at least one summand is infinite-dimensional, or when infinitely many nonzero spaces are summed. For example, the direct sum of countably many copies of R is an infinite-dimensional vector space.

What are some applications of direct sums of vector spaces?

Direct sums of vector spaces have various applications in mathematics and other fields. In linear algebra, they are used to decompose a space into simpler invariant pieces, for example when block-diagonalizing a linear operator or decomposing a representation into invariant subspaces. In physics, they appear when a state space splits into independent sectors. They also arise in computer science, for instance in the analysis of linear codes, where a code can sometimes be decomposed into a direct sum of simpler codes.
