During lecture, the professor gave us a theorem that he wants us to prove on our own before he goes over it in class.
Theorem: Let ##V_1, V_2, ..., V_n## be subspaces of a vector space ##V##. Then the following statements are equivalent.
1. ##W=\sum V_i## is a direct sum.
2. The decomposition of the zero vector is unique.
3. ##V_i\cap\sum_{j\neq i}V_j =\{0\}## for ##i = 1, 2, ..., n##.
4. ##\text{dim}\,W = \sum \text{dim}\,V_i##.
Relevant equations:
- Definition of a basis
- Dimension formula (restated below)
- Definition of a direct sum (restated below)
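For reference, the dimension formula I use below is ##\text{dim}(X+Y) = \text{dim}(X) + \text{dim}(Y) - \text{dim}(X\cap Y)## for finite-dimensional subspaces ##X, Y##, and the definition of a direct sum is that every ##\alpha \in W = \sum V_i## has exactly one decomposition ##\alpha = \alpha_1 + ... + \alpha_n## with ##\alpha_i \in V_i##.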
My Attempt: ## 1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1##
##1 \rightarrow 2##
Statement 1 says ##W=\sum V_i## is a direct sum. By definition, every ##\alpha \in W## has a unique decomposition ##\alpha = \alpha_1 + \alpha_2 + ... + \alpha_n## with ##\alpha_i \in V_i## for ##i = 1, 2, ..., n##. Take ##\alpha = 0##: uniqueness of its decomposition forces ##\alpha_i = 0## for all ##i##, which is statement 2.
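To make this step explicit, the two decompositions of the zero vector being compared are
\begin{eqnarray*}
0 & = & \alpha_1 + \alpha_2 + ... + \alpha_n, \qquad \alpha_i \in V_i,\\
0 & = & 0 + 0 + ... + 0, \qquad 0 \in V_i,\\
\end{eqnarray*}
and since the decomposition is unique, these must coincide, so ##\alpha_i = 0## for every ##i##.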
##2 \rightarrow 3##
Statement 2 says the only decomposition ##0 = \alpha_1 + ... + \alpha_n## with ##\alpha_i \in V_i## for ##i = 1, 2, ..., n## is the trivial one. Suppose, for contradiction, that there exists a nonzero ##x_i \in V_i \cap \sum_{j\neq i}V_j##. Then ##x_i = \sum_{j\neq i} x_j## for some ##x_j \in V_j##, hence ##x_i - \sum_{j\neq i} x_j = 0##. Since ##x_i \neq 0##, this exhibits a decomposition of the zero vector with a nonzero ##V_i##-component, contradicting statement 2. Therefore ##V_i \cap \sum_{j\neq i}V_j = \{0\}##.
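Grouping the terms of that relation by subspace makes the contradiction explicit:
\begin{eqnarray*}
0 & = & (-x_1) + ... + (-x_{i-1}) + x_i + (-x_{i+1}) + ... + (-x_n), \qquad -x_j \in V_j, \; x_i \neq 0,\\
\end{eqnarray*}
which is a decomposition of ##0## that is not the trivial one.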
##3 \rightarrow 4##
Statement 3 says ##V_i\cap\sum_{j\neq i}V_j =\{0\}## for ##i = 1, 2, ..., n##, which implies ##\text{dim}(V_i\cap\sum_{j\neq i}V_j) = 0##. Now apply the dimension formula, which states ##\text{dim}(X+Y) = \text{dim}(X) + \text{dim}(Y) - \text{dim}(X\cap Y)##:
\begin{eqnarray*}
\text{dim}(V_1+(V_2 + ... + V_n)) & = & \text{dim}(V_1) + \text{dim}(V_2 + (V_3 + ... + V_n)) - \text{dim}(V_1 \cap \sum_{j=2}^nV_j)\\
& = & \text{dim}(V_1) + \text{dim}(V_2) + \text{dim}(V_3 + (V_4 +... + V_n)) - \text{dim}(V_2 \cap \sum_{j=3}^nV_j)\\
\end{eqnarray*}
where ##\text{dim}(V_1 \cap \sum_{j=2}^nV_j) = 0## by statement 3, and the remaining intersection terms vanish as well, since ##V_i \cap \sum_{j>i}V_j \subseteq V_i \cap \sum_{j\neq i}V_j = \{0\}##. Repeatedly applying the dimension formula to ##\text{dim}(V_i + V_{i + 1} + ... + V_{n})## therefore yields
\begin{eqnarray*}
\text{dim}(V_1+(V_2 + ... + V_n)) & = & \text{dim}(V_1) + \text{dim}(V_2) + ... + \text{dim}(V_n)\\
& = & \sum_{i = 1}^n\text{dim}(V_i)\\
\end{eqnarray*}
Since ##W = \sum_{i = 1}^nV_i##, this says ##\text{dim}\,W = \sum_{i=1}^n\text{dim}(V_i)##, which is statement 4.
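As a sanity check, the ##n = 3## case of this telescoping reads
\begin{eqnarray*}
\text{dim}(V_1+V_2+V_3) & = & \text{dim}(V_1) + \text{dim}(V_2+V_3) - \text{dim}(V_1 \cap (V_2+V_3))\\
& = & \text{dim}(V_1) + \text{dim}(V_2) + \text{dim}(V_3) - \text{dim}(V_1 \cap (V_2+V_3)) - \text{dim}(V_2 \cap V_3)\\
& = & \text{dim}(V_1) + \text{dim}(V_2) + \text{dim}(V_3),\\
\end{eqnarray*}
where both intersections are ##\{0\}## by statement 3 (note ##V_2 \cap V_3 \subseteq V_2 \cap (V_1+V_3)##).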
##4 \rightarrow 1##
Statement 4 says ##\text{dim}\,W = \sum_{i=1}^n\text{dim}(V_i)##. By definition of the sum of subspaces, ##W = \sum_{i=1}^nV_i = \{\alpha_1 + \alpha_2 + ... + \alpha_n \in V: \alpha_i \in V_i \text{ for } i = 1,..., n\}##. We seek to show that every ##\alpha \in W## has a unique decomposition. By hypothesis, ##\text{dim}\,W = m## and ##\text{dim}(V_i) = m_i##, where ##m = \sum_{i = 1}^nm_i##. Each ##V_i## has a basis ##\Lambda_i## consisting of ##m_i## linearly independent vectors. Since ##\alpha_i \in V_i##, there is a unique linear combination ##\alpha_i = \sum_{k=1}^{m_i}c_{i,k}\beta_{i,k}##, where the ##c_{i,k}## are scalars in the field and ##\beta_{i,k} \in \Lambda_i##. Thus ##\alpha \in W## can be written as
\begin{eqnarray*}
\alpha & = & \alpha_1 + \alpha_2 + ... + \alpha_n\\
& = & (\sum_{k=1}^{m_1}c_{1,k}\beta_{1,k}) + (\sum_{k=1}^{m_2}c_{2,k}\beta_{2,k}) + ... + (\sum_{k=1}^{m_n}c_{n,k}\beta_{n,k})
\end{eqnarray*}
Let ##\Lambda = \Lambda_1 \cup \Lambda_2 \cup ... \cup \Lambda_n##. The display above shows that ##\Lambda## spans ##W##, and ##\Lambda## contains at most ##m = m_1 + ... + m_n## vectors; since ##\text{dim}\,W = m## by hypothesis, ##\Lambda## must be a basis of ##W## consisting of exactly ##m## linearly independent vectors. The coordinates of ##\alpha## with respect to this basis are unique, so the ##c_{i,k}##, and hence the ##\alpha_i = \sum_{k=1}^{m_i}c_{i,k}\beta_{i,k}##, are uniquely determined. Thus the decomposition ##\alpha = \alpha_1 + \alpha_2 + ... + \alpha_n## with ##\alpha_i \in V_i## for ##i = 1, 2, ..., n## is unique; therefore ##W = \sum_{i = 1}^nV_i## is a direct sum.
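To spell the uniqueness out once more: if ##\alpha = \sum_{i}\alpha_i = \sum_{i}\alpha_i'## were two decompositions with ##\alpha_i, \alpha_i' \in V_i##, expanding both sides in the basis ##\Lambda## gives
\begin{eqnarray*}
0 \;=\; \alpha - \alpha & = & \sum_{i = 1}^n\sum_{k=1}^{m_i}(c_{i,k} - c_{i,k}')\beta_{i,k},\\
\end{eqnarray*}
and linear independence of the ##\beta_{i,k}## forces ##c_{i,k} = c_{i,k}'## for all ##i, k##, hence ##\alpha_i = \alpha_i'## for every ##i##.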
Since ##1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1##, all four statements are equivalent.
_________________________
Now I feel like my proof overall, especially ##4 \rightarrow 1##, could be improved upon. Do you have any suggestions on how I can make the proof better? Are there any logical errors? Is there an alternative way to prove this? I appreciate any feedback or criticism. Thank you for your time and have a wonderful day.