Cartesian sum of subspace and quotient space isomorphic to whole space

  • #1
Eclair_de_XII
Homework Statement
Let ##X## be a vector space and ##Y## a linear subspace. Prove that the Cartesian sum ##Y\oplus X/Y## is isomorphic to ##X##.
Relevant Equations
Isomorphic:
Two vector spaces ##A## and ##B## are isomorphic to each other if there is a one-to-one and onto linear function ##f## from ##A## to ##B## (equivalently, from ##B## to ##A##).

Quotient space:
##X/Y=\{x\in X:\exists\,x_0 \in X \textrm{ such that } x-x_0=y\textrm{ for some }y\in Y\}##

Cartesian sum:
Let ##A,B## be two sets. Then the Cartesian sum of those sets is denoted ##A\oplus B## and consists of elements of the form ##(a,b)## where ##a\in A,b\in B##. Addition and scalar multiplication are defined component-wise.

Theorem:
##\dim(Y\oplus X/Y)=\dim X##
Let ##n=\dim X## and ##m=\dim Y##.

Define a basis for ##X##: ##y_1,\ldots,y_m,z_{m+1},\ldots,z_n##, where the first ##m## vectors are a basis for ##Y##. The remaining ##n-m## vectors are a basis for a complement of ##Y## in ##X##; call it ##Z##. Then ##X## is the direct sum of ##Y## and ##Z##; denote it as ##X=Y+Z##. In other words, you can express any ##x\in X## uniquely as ##x=y+z## for some ##y\in Y,z\in Z##.

Define a linear transformation ##T:X\rightarrow (Y\oplus X/Y)## by ##T(x)=(y,z)## where ##x=y+z##. We prove that it is one-to-one and onto.

Suppose ##T(x_1)=T(x)## for some ##x_1\in X##, where ##x_1=y_1+z_1##. Then ##(y_1,z_1)=(y,z)##, and in turn ##(y_1-y,z_1-z)=(0,0)##; hence ##y_1=y## and ##z_1=z##. It follows that ##x_1=y_1+z_1=y+z=x##.

Let ##(y_0,z_0)\in Y\oplus Z##. Since ##y_0\in Y## and ##z_0\in Z##, we have ##y_0+z_0\in Y+Z=X##. Hence there is an ##x_0\in X## with ##x_0=y_0+z_0##, and ##T(x_0)=(y_0,z_0)##.

Hence, ##Y\oplus X/Y## is isomorphic to ##X##.
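Just to see the decomposition and the map ##T## in a concrete case, here is a minimal numerical sketch; it assumes ##X=\mathbb{R}^3##, ##Y=\operatorname{span}\{e_1,e_2\}## and ##Z=\operatorname{span}\{e_3\}##, choices made purely for illustration (the pair ##(y,z)## stands in for an element of ##Y\oplus X/Y##, as in ##T## above):
```python
# Minimal sketch of the decomposition x = y + z and the map T(x) = (y, z),
# assuming the concrete choices X = R^3, Y = span{e1, e2}, Z = span{e3}.
import numpy as np

def decompose(x):
    """Split x uniquely as y + z with y in Y = span{e1, e2} and z in Z = span{e3}."""
    y = np.array([x[0], x[1], 0.0])
    z = np.array([0.0, 0.0, x[2]])
    return y, z

def T(x):
    """The map from the attempt above, returning the pair (y, z)."""
    return decompose(x)

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([4.0, -1.0, 3.0])

# One-to-one, illustrated for one pair: distinct inputs give distinct images.
y1, z1 = T(x1)
y2, z2 = T(x2)
assert not (np.allclose(y1, y2) and np.allclose(z1, z2))

# Onto, illustrated for one pair (y0, z0): it is hit by x0 = y0 + z0.
y0 = np.array([5.0, 6.0, 0.0])
z0 = np.array([0.0, 0.0, 7.0])
x0 = y0 + z0
assert np.allclose(T(x0)[0], y0) and np.allclose(T(x0)[1], z0)
```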
 
Last edited:
  • #2
Your definition of quotient space doesn't look right.

For one thing, every ##x \in X## is trivially in ##\{x \in X : \exists x_0 \in X \text{ such that }x - x_0 = y \text{ for some }y \in Y\}##: just take ##x_0 = x## and ##y = 0##.

More importantly, the elements of ##X/Y## are subsets of ##X## (namely, cosets of ##Y##), not elements of ##X##.

The correct definition is: ##X / Y = \{x + Y : x \in X\}##, where ##x + Y = \{x + y : y \in Y\}##.
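In particular, two cosets are equal exactly when their representatives differ by an element of ##Y##:
$$
x + Y = x' + Y \iff x - x' \in Y.
$$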
 
  • Like
Likes Eclair_de_XII and etotheipi
  • #3
It must be a general fact, I guess, independently of whether the dimension is finite or infinite. There is a subspace ##W\subset X## such that ##X=Y\oplus W##. Let ##p:X\to X/Y## stand for the natural projection. Then ##p\mid_W:W\to X/Y## is an isomorphism. Isn't it?
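Indeed, the check is short:
$$
\ker\left(p\big|_W\right)=W\cap Y=\{0\},\qquad p(x)=p(y+w)=p(w)\quad\text{for }x=y+w,\ y\in Y,\ w\in W,
$$
so ##p\mid_W## is injective and onto ##X/Y##.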
 
  • #4
The statement doesn't mention finite dimensions!

You can work with a basis in finite dimensions.
$$
\underbrace{\underbrace{y_1,\ldots,y_m}_{\text{basis of }Y},x_{m+1},\ldots,x_n}_{\text{basis of }X}
$$
Now ##n-m=\dim X/Y=\dim U## where ##U:=\operatorname{span}\{x_{m+1},\ldots,x_n\}##. Since all vector spaces of the same dimension are isomorphic, we get ##X=Y\oplus U \cong Y\oplus X/Y## as required.
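The step ##n-m=\dim X/Y## uses that the cosets of the added basis vectors form a basis of ##X/Y##; a quick sketch:
$$
X/Y=\operatorname{span}\{x_{m+1}+Y,\ldots,x_n+Y\},\qquad \sum_{j=m+1}^{n}c_j\,(x_j+Y)=Y\ \Longrightarrow\ \sum_{j=m+1}^{n}c_j\,x_j\in Y\ \Longrightarrow\ c_{m+1}=\cdots=c_n=0,
$$
the last implication by the linear independence of the combined basis of ##X##.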

But what if we cannot enumerate a basis, e.g. in the vector space of continuous functions? How could you proceed in infinite dimensions?
 
  • #5
fresh_42 said:
How could you proceed in infinite dimensions?
Who is this addressed to?
 
  • Like
Likes Delta2
  • #6
wrobel said:
Who is this addressed to?
The OP of course.
 
  • #7
fresh_42 said:
vector space of continuous functions

Okay, let ##Y=\{\textrm{constant functions}\}## and let ##Z=\{\textrm{functions that depend on }t\}##.

Let ##x\in X=C^0(D,R)## for some domain and range, ##D,R##. Then ##x(t)=y(t)+z(t)## for unique functions ##y,z##.

##x_0\mapsto (y_0,\{z_0+y:y\in Y\})##

I'm not really sure what to do, here. Personally, I feel like you could just map ##x_0## to its y-projection in the first coordinate and then associate ##x_0## to the set containing the sum of ##x_0## and some ##y## in the second coordinate. I'm not sure that I'm right, though. My main concern is that there does not seem to be any defined addition or multiplication for the second coordinate of the Cartesian sum.

fresh_42 said:
all vector spaces of the same dimension are isomorphic

You mean because you can map each basis element of one space to exactly one basis element of another, then map it back again?
 
Last edited:
  • #8
Eclair_de_XII said:
You mean because you can map each basis element of one space to exactly one basis element of another, then map it back again?
Yes. If we have bases ##\operatorname{span}\{u_k\}\cong\operatorname{span}\{v_k\}## then we can map ##u_k\longmapsto v_k \longmapsto u_k##. In case one of the spaces is ##X/Y## then a basis looks like ##x_k+Y## and we can map ##x_k+Y \longmapsto v_k \longmapsto x_k+Y.##

The example with ##X=C^0(D,R)## and ##Y=\{f:D\longrightarrow R\,|\,\exists \,r_0\in R \, : \,f(d)=r_0\,\forall\,d\in D\}##, the constant functions (forget ##Z##), can be used for the general problem.

Generally we have
\begin{align*}
Y&\stackrel{\iota}{\rightarrowtail} X \stackrel{\pi}{\twoheadrightarrow} X/Y \\
y &\mapsto y \\
\phantom{y}&\phantom{\hookrightarrow\;\;}x\mapsto x+Y
\end{align*}
What we want to find is an injective, linear map ##\varphi\, : \,X/Y \longrightarrow X## such that ##\pi\circ\varphi=\operatorname{id}_{X/Y}\,.##

Of course we can simply say ##\varphi(x+Y):=x##, but the difficulty is that the representative ##x## of ##x+Y## isn't unique. Every ##x+y## with ##y\in Y## is equivalent to ##x##, i.e. ##x+Y=(x+y)+Y.## So which one shall we use to define ##\varphi##: ##\varphi(x+Y)=x## or ##\varphi(x+Y)=x+y\;?## This means we have to show that we have a well-defined ##\varphi.## Linearity and ##\pi\circ\varphi=\operatorname{id}_{X/Y}## are easy, but why is it well-defined and injective?
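Here is a small sketch of one concrete way to get such a well-defined ##\varphi##: fix a complement of ##Y## and send each coset to the component of any representative in that complement. It assumes ##X=\mathbb{R}^3## with its usual inner product and ##Y=\operatorname{span}\{(1,1,0)\}##, purely for illustration; in a bare vector space one first has to pick a complement somehow.
```python
# Sketch of a well-defined section phi : X/Y -> X, assuming X = R^3 with the
# standard inner product and Y = span{(1, 1, 0)}. The complement chosen here is
# the orthogonal complement of Y; without an inner product some complement has
# to be chosen by other means.
import numpy as np

y_basis = np.array([1.0, 1.0, 0.0])
y_basis = y_basis / np.linalg.norm(y_basis)

def proj_Y(x):
    """Orthogonal projection of x onto Y."""
    return np.dot(x, y_basis) * y_basis

def phi(x):
    """phi(x + Y) := the component of x in the chosen complement W = Y-perp.
    It depends only on the coset, not on the representative x."""
    return x - proj_Y(x)

def pi(x):
    """The natural projection pi : X -> X/Y; the coset x + Y is encoded here by
    its canonical representative in W."""
    return x - proj_Y(x)

x = np.array([2.0, 5.0, 1.0])
x_alt = x + 3.0 * np.array([1.0, 1.0, 0.0])   # another representative of x + Y

# Well-definedness: both representatives of the same coset give the same value.
assert np.allclose(phi(x), phi(x_alt))

# Section property: pi(phi(x + Y)) encodes the same coset as x, i.e. pi o phi = id.
assert np.allclose(pi(phi(x)), pi(x))
```
The first assertion is exactly the well-definedness question: two representatives of the same coset give the same value of ##\varphi##.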
 
  • #9
Eclair_de_XII said:
Okay, let ##Y=\{\textrm{constant functions}\}## and let ##Z=\{\textrm{functions that depend on }t\}##.

Let ##x\in X=C^0(D,R)## for some domain and range, ##D,R##. Then ##x(t)=y(t)+z(t)## for unique functions ##y,z##.

This is false. For example, ##1+t## can have ##y=1,\ z=t## or ##y=2,\ z=t-1##. You don't have uniqueness.

If you define ##Z## to be the functions for which ##z(0)=0##, or something like that, this might be closer to true.
 
  • Like
Likes Eclair_de_XII
  • #10
Note that ##Z = \{\text{functions that depend on }t\}## is not a subspace. The sum of two non-constant functions (e.g. ##t## and ##-t##) can be constant, and multiplying a non-constant function by the scalar zero gives you a constant.
 
  • Like
Likes Eclair_de_XII
  • #11
fresh_42 said:
why is it well-defined and injective

Well-defined: In the case of ##\varphi(x+Y)=x+y##, you are just choosing an element from the set ##x+Y##. Whereas in the case of ##\varphi(x+Y)=x##, what you're doing is just selecting one of the summands of any given element in ##x+Y##.

This is how I see it: let's say we have a 2-D plane. In the latter case, you are choosing an ##x## on the x-axis and not bothering to choose a ##y## on the y-axis. So it's essentially a vertical line, which is not a function, let alone one that is well-defined. Analogously, this vertical line is ##x+Y##. In the former case, you are choosing an ##x## and some ##y## on the appropriate axes. This is a point on the 2-D plane and it is a well-defined function.

As for why it is injective... Suppose ##\varphi(x_1+Y)=\varphi(x_2+Y)##. Then ##x_1+y=x_2+y##. ##X## is closed under the addition, so ##x_1=x_2##.
 
  • #12
Eclair_de_XII said:
Well-defined: In the case of ##\varphi(x+Y)=x+y##, you are just choosing an element from the set ##x+Y##. Whereas in the case of ##\varphi(x+Y)=x##, what you're doing is just selecting one of the summands of any given element in ##x+Y##.

This is how I see it: let's say we have a 2-D plane. In the latter case, you are choosing an ##x## on the x-axis and not bothering to choose a ##y## on the y-axis. So it's essentially a vertical line, which is not a function, let alone one that is well-defined. Analogously, this vertical line is ##x+Y##. In the former case, you are choosing an ##x## and some ##y## on the appropriate axes. This is a point on the 2-D plane and it is a well-defined function.
These ideas touch on what I was thinking about, too. Either use linear independence: start with ##0+Y##, then for ##x_1\notin Y## choose ##x_1##, then for ##x_2\notin \operatorname{span}\{Y,x_1\}## choose ##x_2##, etc. This comes down to the axiom of choice.

The other geometric view is when we have angles and lengths. Then we can choose orthogonal components as long as there are such.

I can't think of a direct proof at the moment, and I'm not sure there is one, although I suspect it.
Eclair_de_XII said:
As for why it is injective... Suppose ##\varphi(x_1+Y)=\varphi(x_2+Y)##. Then ##x_1+y=x_2+y##. ##X## is closed under the addition, so ##x_1=x_2##.
This is a false argument as you seem to use injectivity to show it. Your "Then" is unclear to me. However, you can use the property ##\pi\varphi=1_{X/Y}##. Given ##\varphi(x_1+Y)=\varphi(x_2+Y)## we get ##x_1+Y=\pi(\varphi(x_1+Y))=\pi(\varphi(x_2+Y))=x_2+Y##.

We had to show that ##\varphi\, : \, X/Y \longrightarrow X## is injective, that is: if the images ##\varphi(x_i+Y)## of two elements are equal, then the elements themselves must have been equal, i.e. ##x_1+Y=x_2+Y##. The ##x_i## themselves do not have to be equal. Also, you can conclude ##a=b \Longrightarrow f(a)=f(b)## for every well-defined function ##f##, but not the other way around: ##f(a)=f(b) \nRightarrow a=b##. This injectivity is what we wanted to prove, so we cannot just cancel ##\varphi## as you did; that would be the conclusion, not the assumption. Applying ##\pi##, however, is allowed.
 
  • #13
I thought that the problem had actually been solved in #3. Strange
 
  • #14
wrobel said:
I thought that the problem had actually been solved in #3. Strange
I don't think rephrasing the problem can be considered a solution.
 
  • #15
fresh_42 said:
I don't think rephrasing the problem can be considered a solution.
I don't think it was a rephrasing.
I implicitly applied a well-known theorem. :) I expected that the hint would be understood.

The well-known theorem is as follows.
Theorem. Every subspace ##Y\subset X## of a vector space ##X## has a complement ##W\subset X##, i.e. ##X=Y\oplus W##.
In the infinite-dimensional case, this follows from Zorn's lemma. I do not think that, in the infinite-dimensional case, the problem under discussion can be solved without the axiom of choice.
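For completeness, a sketch of the Zorn argument: order the subspaces ##W\subseteq X## with ##W\cap Y=\{0\}## by inclusion; the union of a chain of such subspaces again meets ##Y## only in ##0##, so Zorn's lemma gives a maximal one, ##W##. If there were some ##x\in X## with ##x\notin Y+W##, then
$$
\left(W\oplus\operatorname{span}\{x\}\right)\cap Y=\{0\},
$$
contradicting the maximality of ##W##; hence ##X=Y+W=Y\oplus W##.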
 
Last edited:
  • Like
Likes Delta2
  • #16
fresh_42 said:
This injectivity is what we wanted to prove, so we cannot just cancel ##\varphi## as you did; that would be the conclusion, not the assumption. Applying ##\pi##, however, is allowed.

What I did was apply the definition of ##\varphi(x+Y)## to the two vectors ##x_1,x_2##. You defined ##\varphi(x+Y)=x+y## for some ##y##. What I did was apply ##\varphi## to ##x_1+Y## and ##x_2+Y##; I got ##\varphi(x_1+Y)=x_1+y## and ##\varphi(x_2+Y)=x_2+y##. Then I equated the two expressions on the right, then canceled the ##y## on either side to arrive at the conclusion: ##x_1=x_2##. I'm sorry for not being clearer.
 
  • #17
Eclair_de_XII said:
What I did was apply the definition of ##\varphi(x+Y)## to the two vectors ##x_1,x_2##. You defined ##\varphi(x+Y)=x+y## for some ##y##. What I did was apply ##\varphi## to ##x_1+Y## and ##x_2+Y##; I got ##\varphi(x_1+Y)=x_1+y## and ##\varphi(x_2+Y)=x_2+y##. Then I equated the two expressions on the right, then canceled the ##y## on either side to arrive at the conclusion: ##x_1=x_2##. I'm sorry for not being clearer.

Eclair_de_XII said:
As for why it is injective... Suppose ##\varphi(x_1+Y)=\varphi(x_2+Y)##. Then ##x_1+y=x_2+y##. ##X## is closed under the addition, so ##x_1=x_2##.

No. You only have ##x_1-x_2\in Y.## The images live in ##X##, so ##Y\neq 0## here.

If we have ##\varphi(x_1+Y)=x_1+y_1=\varphi(x_2+Y)=x_2+y_2##, because we cannot know which ##x_i## represents the coset, then we cannot conclude ##x_1=x_2.## We only know ##x_1-x_2\in Y##. You have to use the property ##\pi\varphi=1_{X/Y}##, because this is what is required from ##\varphi.## Otherwise any linear function ##X/Y\longrightarrow X## would do, which is not the case.
 
  • #18
If we allow the big guns, there is a result that every short exact sequence of vector spaces splits. I think this is what @wrobel was mentioning.
 
  • #19
What I mentioned, I explained in #15.
Once again: without the axiom of choice, you will not solve the infinite-dimensional version of the problem.
 
  • #20
WWGD said:
If we allow the big guns, there is a result that every short exact sequence of vector spaces splits. I think this is what @wrobel was mentioning.
Yes, but that is what had to be proven in my opinion.
wrobel said:
What I mentioned, I explained in #15.
Once again: without the axiom of choice, you will not solve the infinite-dimensional version of the problem.
Thanks, that was what I wasn't sure about.
 
  • #21
I'm confused about what the function ##\pi:X\longrightarrow X/Y## is supposed to do.

I understand that it is a projection. I have normally worked with spaces that consist of finitely many components; the projections I am familiar with usually map elements of such spaces to only one of the components (i.e. mapping a vector in ##\mathbb{R}^2## to either its x- or y-coordinate).

But here, we have a vector ##x\in X## and the space ##X/Y## consists of cosets ##x'+Y=\{x'+y:y\in Y\}##. In short, I am confused as to how to decompose the former into the latter.
 
  • #22
I'm trying to toy around with these ideas, right now, with actual spaces.

I set:

##X=\mathbb{R}^2##
##Y=\{(x,x):x\in \mathbb{R}\}##

##X/Y=\{(x,y)+Y:(x,y)\in X\}##
where
##(x,y)+Y=\{(x,y)+p:p\textrm{ is a point on the line }y = x\}##

Right now, it just looks like ##(x,y)+Y## is an exact copy of ##Y##, shifted by ##(x,y)##. Also, for any ##(x,y)## not on the line ##y=x##, the coset ##(x,y)+Y## does not seem to be a subspace of ##X##, for the sole reason that it does not contain the zero vector. For example, we can represent the line ##y=x+1## as:

##(-1,0)+Y## and ##(0,1)+Y##
but
##(-1,0)\neq (0,1)##

which seems to align with what fresh_42 said earlier, that ##x_1+Y=x_2+Y## does not imply ##x_1=x_2##. Also, the quotient space is like a collection of copies of the graph of ##1_\mathbb{R}## (the line ##y=x##), each shifted by some constant vector.

If I had to take a guess, ##\pi## is defined something like ##\pi:(x,y)+p\mapsto (x,y)\mapsto (x,y)+Y##.
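Here's a small numerical sketch of exactly this example: the coset-equality test, and one concrete encoding of ##\pi## by canonical representatives (the orthogonal-representative choice is only for illustration):
```python
# Sketch of the example above, assuming X = R^2 and Y = {(t, t)} = span{(1, 1)}.
# Cosets are encoded by a canonical representative: the component of the point
# orthogonal to the line y = x.
import numpy as np

d = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit vector along Y

def same_coset(a, b):
    """a + Y == b + Y exactly when a - b lies in Y, i.e. a - b is parallel to (1, 1)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.isclose(diff[0], diff[1])

def pi(x):
    """The natural projection pi : X -> X/Y, encoded by the canonical
    representative of the coset x + Y (the component of x orthogonal to y = x)."""
    x = np.asarray(x, dtype=float)
    return x - np.dot(x, d) * d

# (-1, 0) + Y and (0, 1) + Y are the same coset (both are the line y = x + 1) ...
assert same_coset((-1, 0), (0, 1))
assert np.allclose(pi((-1, 0)), pi((0, 1)))
# ... even though the representatives themselves differ.
assert not np.allclose((-1.0, 0.0), (0.0, 1.0))
```
This matches the observation above: ##(-1,0)+Y=(0,1)+Y## even though ##(-1,0)\neq (0,1)##.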
 
Last edited:
  • #23
Eclair_de_XII said:
I'm confused about what the function ##\pi:X\longrightarrow X/Y## is supposed to do.

I understand that it is a projection. I have normally worked with spaces that consist of finitely many components; the projections I am familiar with usually map elements of such spaces to only one of the components (i.e. mapping a vector in ##\mathbb{R}^2## to either its x- or y-coordinate).

But here, we have a vector ##x\in X## and the space ##X/Y## consists of cosets ##x'+Y=\{x'+y:y\in Y\}##. In short, I am confused as to how to decompose the former into the latter.
What is the most natural function from ##X## to ##X/Y##? (Hint: if ##x \in X##, then which coset contains ##x##?)
 
  • #24
Well, ##x+Y=\{x+y:y\in Y\}## contains ##x##, since ##0\in Y##.
 
  • #25
Wait, so in the example I described in post #22, would it be like:

##\pi:(x,y)+p\mapsto (x,y)+p+Y##
 
  • #26
Eclair_de_XII said:
Wait, so in the example I described in post #22, would it be like:

##\pi:(x,y)+p\mapsto (x,y)+p+Y##
Doesn't ##(x,y)## refer to an arbitrary point in ##X = \mathbb R^2##? If so, then ##\pi(x,y)## would simply be ##(x,y) + Y##. In other words, ##\pi## maps each point of ##X## to the coset that contains that point.

Note that if ##p \in Y##, then ##p+Y = Y##, so ##(x,y) + p + Y = (x,y) + Y##.
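Concretely, in the example from post #22, with ##Y## the line ##y=x##:
$$
\pi(2,5)=(2,5)+Y=\{(2+t,\,5+t):t\in\mathbb{R}\}=(0,3)+Y=\pi(0,3),
$$
since ##(2,5)-(0,3)=(2,2)\in Y##.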
 
  • #27
jbunniii said:
In other words, ##\pi## maps each point of ##X## to the coset that contains that point.

Oh, I think I understand why ##\varphi## is injective. In the case of ##X=\mathbb{R}^2## and ##Y=\textrm{the graph of }y=x##, suppose ##\varphi## maps two parallel lines (cosets ##x_1+Y,\,x_2+Y##) to the same point (a vector in ##X##). Because ##\pi\circ\varphi=\operatorname{id}##, that point lies on both lines; and since two distinct parallel lines never intersect, the lines must be equal.

It's like these cosets are equivalence classes, in the sense that they are either disjoint or identical.
 

