Isomorphisms have the same dimensions

In summary, the thread discusses isomorphic vector spaces and a proof that two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension. The conversation covers the difficulty of proving both the sufficiency and the necessity of this criterion, the role of linear transformations (as opposed to matrices), and a related theorem whose proof is completed once the missing piece is found.
  • #1
E'lir Kramer
Advanced Calculus of Several Variables, 5.6:

Two vector spaces V and W are called isomorphic if and only if there exist linear mappings [itex] S : V \to W[/itex] and [itex]T : W \to V[/itex] such that [itex]S \circ T[/itex] and [itex] T \circ S [/itex] are the identity mappings of [itex]W[/itex] and [itex]V[/itex] respectively. Prove that two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension.

I'm having a hard time proving this formally. In particular, I'm stuck on the sufficiency of the criterion. For necessity, I have this much:

It's clear that if one space, say V, has more dimensions than the other, then there are distinct vectors in V which all map to the same value in W. For instance, the vectors [itex] v \in V [/itex] with [itex]\langle v, w \rangle = 0 [/itex] for all [itex]w \in W [/itex] all map to 0 in W (by the positivity property of inner products). There is then no function [itex]W \to V[/itex] that can reclaim the lost information.

Formally, for all [itex] S : V \to W [/itex], [itex]S(v) = 0 [/itex] when v is orthogonal to every vector in W.

Consider two vectors [itex]v_{1}[/itex] and [itex]v_{2}[/itex] which are both orthogonal to every vector in W.

Then, for example, [itex]S(v_{1}) = S(v_{2}) = 0 \in W[/itex], and we have [itex] (T \circ S) (v_{1}) = T(S(v_{1})) = T(S(v_{2})) = T(0). [/itex]

In order for the isomorphism to exist, [itex] (T \circ S) (v_{1}) = v_{1} [/itex] and [itex] (T \circ S) (v_{2}) = v_{2} [/itex]. But this is impossible since [itex](T \circ S) (v_{2}) = (T \circ S) (v_{1})[/itex], and [itex]v_{1} \neq v_{2}[/itex].

Though I am satisfied by this, I have the feeling that I'm supposed to show this using matrix multiplication. The formalism that is eluding me is hiding in the phrase "the identity mappings of W and V respectively". What does that mean? That [itex]S \circ T = I_{W}[/itex]? And what does that mean, formally?
 
  • #2
Are you allowed to assume a given inner product on the spaces? That does not appear in the statement.

I would think that the simplest way to do this would be to show that if [itex]\{v_1, v_2, ..., v_n\}[/itex] is a basis for V then [itex]\{S(v_1), S(v_2), ..., S(v_n)\}[/itex] is a basis for W.
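
One way that approach could be carried out, as a sketch (filling in the details the hint leaves to the reader):

Suppose [itex]S : V \to W[/itex] and [itex]T : W \to V[/itex] are linear with [itex]T \circ S = I_{V}[/itex] and [itex]S \circ T = I_{W}[/itex], and let [itex]\{v_1, v_2, ..., v_n\}[/itex] be a basis for V. For any [itex]w \in W[/itex], write [itex]T(w) = a_1 v_1 + \cdots + a_n v_n[/itex]; then
[tex] w = S(T(w)) = a_1 S(v_1) + \cdots + a_n S(v_n), [/tex]
so the [itex]S(v_i)[/itex] span W. If [itex]c_1 S(v_1) + \cdots + c_n S(v_n) = 0[/itex], applying T gives [itex]c_1 v_1 + \cdots + c_n v_n = T(0) = 0[/itex], so every [itex]c_i = 0[/itex] by the independence of the [itex]v_i[/itex]. Hence [itex]\{S(v_1), ..., S(v_n)\}[/itex] is a basis for W and [itex]\dim W = n = \dim V[/itex]. For the converse, sending one basis to the other and extending linearly gives the required mappings S and T.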
 
  • #3
Earlier in the chapter, a theorem is given:

"Let [itex] L : V \to W [/itex] be linear, with V being n-dimensional. If Ker L = 0, then L is one-to-one, and I am L is an n-dimensional subspace of W."

I had a hard time with the proof of this theorem, and the exact place where I had the difficulty is relevant to this proof.
The author writes that

"To show that the subspace I am L is n-dimensional, start with a basis [itex]v_{1}, ... v_{n}[/itex] for V. Since it is clear, (by the linearly of L) that the vectors [itex]L(v_{1}), ..., L(v_{n}) [/itex] generate a basis for I am L, it suffices to prove that they are linearly independent..."

To me, it isn't clear at all how the linearity of L makes [itex]L(v_{1}), ..., L(v_{n}) [/itex] a basis for Im L. Well, intuitively, I'm not surprised. It seems pretty clear that a linear transformation should carry an orthonormal basis to a basis for its image, but I can't provide a formalism for that. And moreover, I can't see how *any* basis does so, especially not formally. One of my goals in this reading is to get better at providing formal arguments for all of my proofs. I actually spent a few minutes trying to convince myself of the fact the first time I read this "proof" but never was able to do so. And now I've just spent another hour trying again. Could you explain it to me?
 
  • #4
E'lir Kramer said:
Earlier in the chapter, a theorem is given:

"Let [itex] L : V \to W [/itex] be linear, with V being n-dimensional. If Ker L = 0, then L is one-to-one, and I am L is an n-dimensional subspace of W."

I had a hard time with the proof of this theorem, and the exact place where I had the difficulty is relevant to this proof.
The author writes that

"To show that the subspace I am L is n-dimensional, start with a basis [itex]v_{1}, ... v_{n}[/itex] for V. Since it is clear, (by the linearly of L) that the vectors [itex]L(v_{1}), ..., L(v_{n}) [/itex] generate a basis for I am L, it suffices to prove that they are linearly independent..."

To me, it isn't clear at all how the linearity of L makes [itex]L(v_{1}), ..., L(v_{n}) [/itex] a basis for Im L. Well, intuitively, I'm not surprised. It seems pretty clear that a linear transformation should carry an orthonormal basis to a basis for its image, but I can't provide a formalism for that. And moreover, I can't see how *any* basis does so, especially not formally. One of my goals in this reading is to get better at providing formal arguments for all of my proofs. I actually spent a few minutes trying to convince myself of the fact the first time I read this "proof" but never was able to do so. And now I've just spent another hour trying again. Could you explain it to me?

Any element of Im L can be written as L(v) for some v in V. Since v1,v2,...,vn is a basis of V, v can be written as a1*v1+a2*v2+...+an*vn. If you work out L(a1*v1+a2*v2+...+an*vn) you should be able to show L(v1),L(v2),...,L(vn) must span Im L. Next, to finish showing it's a basis you need to show L(v1),L(v2),...,L(vn) are also linearly independent. Use the definition of linear independence. It will be easiest to show that if you assume they are linearly dependent then that would contradict Ker L={0}. Try it!
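
For reference, the independence step can be written out along these lines (a sketch, using the hypothesis Ker L = {0}): suppose [itex]c_1 L(v_1) + c_2 L(v_2) + \cdots + c_n L(v_n) = 0[/itex] with not all [itex]c_i = 0[/itex]. By linearity,
[tex] L(c_1 v_1 + c_2 v_2 + \cdots + c_n v_n) = 0, [/tex]
so [itex]c_1 v_1 + \cdots + c_n v_n \in \operatorname{Ker} L = \{0\}[/itex]. But [itex]v_1, ..., v_n[/itex] are linearly independent, so [itex]c_1 v_1 + \cdots + c_n v_n = 0[/itex] forces every [itex]c_i = 0[/itex], a contradiction. Hence [itex]L(v_1), ..., L(v_n)[/itex] are linearly independent.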
 
  • #5
Thanks Dick, your remark that "Any element of Im L can be written as L(v) for some v in V" was what I was missing to prove that [itex]L(v_{1}), ... , L(v_{n})[/itex] spans W if [itex]v_{1}, ... , v_{n} [/itex] spans V. Because any vector in V can be written as [itex]a_{1}v_{1} + ... + a_{n}v_{n}[/itex], then any vector in W can be written as [itex]L(a_{1}v_{1} + ... + a_{n}v_{n})[/itex]. By linearity of L, then, any vector in W can be written as [itex]a_{1}L(v_{1}) + ... + a_{n}L(v_{n})[/itex], proving that [itex]L(v_{1}), ... , L(v_{n})[/itex] spans W.

Now if we have that [itex]v_{1}, ... , v_{n} [/itex] are a basis and Ker L = 0, it's easy to prove that [itex]L(v_{1}), ... , L(v_{n})[/itex] are also linearly independent and thus constitute a basis. But do I even have to do this for 5.6? I've shown that dim V = dim W is necessary in my first post and sufficient in this post. Why is it necessary that they form a basis?

PS: Ivy, you are correct in pointing out that an inner product is not defined in this problem statement, but we have that dim Ker L = dim V - dim Im L, so we know without an inner product that there will be infinitely many vectors of V mapping to 0 if dim V - dim Im L > 0, so the proof is easily mended.
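
As a concrete instance of that dimension count (an illustrative example, not from the text): if [itex]\dim V = 3[/itex] and [itex]\dim \operatorname{Im} L = 2[/itex], then
[tex] \dim \operatorname{Ker} L = \dim V - \dim \operatorname{Im} L = 3 - 2 = 1, [/tex]
so Ker L is a whole line of vectors mapping to 0, and any two distinct vectors on that line give the collision used in the argument of the first post.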
 
  • #6
E'lir Kramer said:
Thanks Dick, your remark that "Any element of Im L can be written as L(v) for some v in V" was what I was missing to prove that [itex]L(v_{1}), ... , L(v_{n})[/itex] spans W if [itex]v_{1}, ... , v_{n} [/itex] spans V. Because any vector in V can be written as [itex]a_{1}v_{1} + ... + a_{n}v_{n}[/itex], then any vector in W can be written as [itex]L(a_{1}v_{1} + ... + a_{n}v_{n})[/itex]. By linearity of L, then, any vector in W can be written as [itex]a_{1}L(v_{1}) + ... + a_{n}L(v_{n})[/itex], proving that [itex]L(v_{1}), ... , L(v_{n})[/itex] spans W.

Now if we have that [itex]v_{1}, ... , v_{n} [/itex] are a basis and Ker L = 0, it's easy to prove that [itex]L(v_{1}), ... , L(v_{n})[/itex] are also linearly independent and thus constitute a basis. But do I even have to do this for 5.6? I've shown that dim V = dim W is necessary in my first post and sufficient in this post. Why is it necessary that they form a basis?

PS: Ivy, you are correct in pointing out that an inner product is not defined in this problem statement, but we have that dim Ker L = dim V - dim Im L, so we know without an inner product that there will be infinitely many vectors of V mapping to 0 if dim V - dim Im L > 0, so the proof is easily mended.

To show 5.6 using this theorem you don't have to repeat the whole thing. If you can show TS=ST=identity means S and T are one-to-one then it's easy from there.
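
A sketch of how that step could go (using the theorem quoted in post #3): if [itex]T \circ S = I_{V}[/itex] and [itex]S(u_1) = S(u_2)[/itex], then
[tex] u_1 = T(S(u_1)) = T(S(u_2)) = u_2, [/tex]
so S is one-to-one, i.e. Ker S = {0}; the same argument with [itex]S \circ T = I_{W}[/itex] shows T is one-to-one. The quoted theorem then says [itex]\operatorname{Im} S[/itex] is a subspace of W of dimension [itex]\dim V[/itex], and since [itex]S \circ T = I_{W}[/itex] makes S onto (every w equals [itex]S(T(w))[/itex]), [itex]\operatorname{Im} S = W[/itex] and [itex]\dim W = \dim V[/itex].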
 

FAQ: Isomorphisms have the same dimensions

What is an isomorphism?

An isomorphism is a bijective mapping between two structures that preserves their operations and essential properties. In the vector space setting, an isomorphism is an invertible linear map; two spaces related by such a map are called isomorphic.

What does it mean for two isomorphic vector spaces to have the same dimension?

If two finite-dimensional vector spaces are isomorphic, then they have the same dimension: an isomorphism matches the two spaces up in a way that preserves the linear structure, so a basis of one is carried to a basis of the other, even though the elements themselves may be different.

How can you prove that two isomorphic vector spaces have the same dimension?

Show that an isomorphism carries a basis of one space to a basis of the other: the images of the basis vectors span the target because the map is onto, and they are linearly independent because the map is one-to-one. The two spaces therefore have bases with the same number of elements, so their dimensions are equal. Conversely, if the dimensions are equal, mapping one basis onto the other and extending linearly produces an isomorphism.

Why is it important that isomorphic vector spaces have the same dimension?

Dimension is an invariant of isomorphism: it is one of the properties that must be preserved if two spaces are truly equivalent. If two spaces have different dimensions, they differ in an essential structural property and so cannot be isomorphic.

Can two structures be isomorphic even if they have different dimensions?

No, two finite-dimensional vector spaces cannot be isomorphic if they have different dimensions. Equal dimension is a consequence of being isomorphic (and is also sufficient to construct an isomorphism), so spaces of different dimensions cannot be isomorphic.
