Proof of Existence of Tensor Product .... Cooperstein ....

In summary: the discussion concerns Cooperstein's proof of Theorem 10.1 (existence of the tensor product), in particular whether the elements of the vector space [itex]Z[/itex] built on the set [itex]X = V_1 \times \dots \times V_m[/itex] are [itex]m[/itex]-tuples or formal linear combinations of the characteristic functions [itex]\chi_x[/itex].
  • #1
Math Amateur
I am reading Bruce N. Cooperstein's book: Advanced Linear Algebra (Second Edition) ... ...

I am focused on Section 10.1 Introduction to Tensor Products ... ...

I need help with the proof of Theorem 10.1 on the existence of a tensor product ... ...

Theorem 10.1 reads as follows:
[Image: Cooperstein - Theorem 10.1 - and beginning of proof]

In the above text we read the following:

" ... ... Because we are in the vector space [itex]Z[/itex], we can take scalar multiples of these objects and add them formally. So for example, if [itex]v_i , v'_i \ , \ 1 \leq i \leq m[/itex], then there is an element [itex](v_1, \ ... \ , \ v_m ) + (v'_1, \ ... \ , \ v'_m )[/itex] in [itex]Z[/itex] ... ... "So it seems that the elements of the vector space [itex]Z[/itex] are of the form [itex](v_1, \ ... \ , \ v_m )[/itex] ... ... the same as the elements of [itex]X[/itex] ... that is [itex]m[/itex]-tuples ... except that [itex]Z[/itex] is a vector space, not just a set so that we can add them and multiply elements by a scalar from [itex]\mathbb{F}[/itex] ... ...

... ... BUT ... ...

... earlier in 10.1 when talking about a UMP ... Cooperstein discussed a vector space [itex]V[/itex] based on a set [itex]X[/itex] and defined [itex]\lambda_x[/itex] to be a map from [itex]X[/itex] to [itex]\mathbb{F}[/itex] such that

[itex]\lambda_x (y) = 1[/itex] if [itex]y = x[/itex] and [itex]0[/itex] otherwise ...

Then [itex]i \ : \ X \longrightarrow V[/itex] was defined by [itex]i(x) = \lambda_x[/itex]

... as in the Cooperstein text at the beginning of Section 10.1 ...
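
To make the construction concrete, here is a minimal Python sketch (my own illustration, not Cooperstein's; the dict representation and the names chi (for the characteristic functions [itex]\lambda_x[/itex] above), add and scale are invented for the example):

[CODE]
# A vector in M_fin(X, F) is a finitely supported function X -> F,
# stored here as a dict {x: coefficient}; absent keys mean coefficient 0.

def chi(x):
    """The characteristic function chi_x: sends x to 1, everything else to 0."""
    return {x: 1}

def add(f, g):
    """Pointwise sum of two finitely supported functions."""
    h = dict(f)
    for x, c in g.items():
        h[x] = h.get(x, 0) + c
        if h[x] == 0:
            del h[x]          # keep the stored support minimal
    return h

def scale(c, f):
    """Scalar multiple c * f."""
    return {x: c * v for x, v in f.items()} if c != 0 else {}

# The map i: X -> V is x |-> chi_x, and a general element of V is a finite
# linear combination of the chi_x, e.g. 3*chi_a + 2*chi_b:
v = add(scale(3, chi("a")), scale(2, chi("b")))
print(v)                      # {'a': 3, 'b': 2}
[/CODE]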

The relevant text from Cooperstein reads as follows:
[Image: Cooperstein - 1 - UMP - Section 10.1 - PART 1]

[Image: Cooperstein - 2 - UMP - Section 10.1 - PART 2]
 

  • #2
Can you explain what your difficulty is with this text? You haven't asked an explicit question.
 
  • #3
Sorry Andrew ... my explicit question is as follows:

So ... given the construction and the definitions in the text directly above from the beginning of Section 10.1 ... and comparing this with Theorem 10.1 ... it appears that in the case of the beginning of Theorem 10.1 where [itex]Z[/itex] takes the place of [itex]V[/itex], the elements of [itex]Z[/itex] should be of the form [itex]\lambda_x[/itex] ... not of the form [itex](v_1, \ ... \ , \ v_m )[/itex] ... ... ?

Can someone please clarify the nature of the elements of [itex]Z[/itex] ... are they of the same form as the elements of [itex]X[/itex] ... that is m-tuples ... or are they of the form [itex]\lambda_x[/itex] ... ... ?

Hope someone can help ... ...

Peter
 
  • #4
The answer is in the second sentence of the proof, where it says 'we identify each element of ##X## with ##\chi_x##' (by the way, you wrote ##\lambda_x## above where I think you meant ##\chi_x## - chi, not lambda).
That means the author is going to pretend that they are the same thing, because it doesn't make any difference to the validity of the operations they will perform.

Personally I don't like identifications like this. I find it sloppy and lazy, and it generates unnecessary confusion, as it has done with you. In my experience it can nearly always be avoided without any significant extra work.

By the way, this is a very complex presentation of tensors, which is completely unnecessary unless you want to understand them from a Category Theory point of view. The reason for all his weird definitions seems to be because he wants to fit tensors and vector spaces into a Category Theoretic framework. If you don't feel the need to do that, just go with a non-categoric presentation of tensors and vector spaces. It's just as rigorous. Vector spaces are very simple objects, and tensor spaces are not that much more complicated as long as you understand the notion of quotients.

Did you know that one of the names used by mathematicians for Category Theory is General Abstract Nonsense? That's not to say that it's not sometimes useful. It has some quite practical uses in Algebraic Topology. But I can see no practical use for it in linear algebra or differential geometry.
 
  • #5
Thanks for your help, Andrew ... including pointing out my careless mistake over [itex] \chi_x [/itex] and [itex] \lambda_x [/itex] ... ...

Thanks also for the bit on identifications ... most helpful ...

I would like to understand category theory ... but find its high level of abstraction quite daunting ... but it is a neat way of thinking about things ... will probably try a category oriented and a non-category approach ... mind you, you are largely correct ... I will probably attain my first real understanding of tensors through a non-category theory approach ...

Note that it is usually only after getting a good basic non-category view of something that I can follow the category approach ...

I have certainly heard the "abstract nonsense" claim ... :smile:..

Thanks again,

Peter
 
  • #6
the main idea is understanding what a basis for a vector space does for you. namely a basis is a set that lets you easily define linear maps on your whole vector space, just by defining the map on the basis. I.e. you can define your map on the basis any way you like, and then it will always extend linearly in one and only one way to the whole space.

now if you know that, and you pose for yourself the problem of constructing a space T whose linear maps correspond to bilinear maps on VxW, the first step is to form the vector space Z whose basis is VxW. Then any map out of VxW, bilinear or not, will extend uniquely to a linear map on Z. So we have then this big space Z and an injection VxW-->Z, such that, given any map out of VxW, there is a unique linear map out of Z, that restricts to the original map on the subset VxW.

The second step is to modify Z so that not all maps on VxW will extend, but only bilinear maps will extend. The trick is to look at which vectors in Z get sent to zero by linear maps on Z which extend bilinear maps on VxW. This is a big subspace of Z we will call U. Now if we mod Z out by U, we get a quotient space T= Z/U, such that linear maps out of T correspond to those linear maps out of Z that send U to zero. We carefully cook up this U so that sending U to zero only happens for maps that were bilinear on VxW. Then this T = Z/U behaves like we want a tensor product to behave.
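
in symbols (my paraphrase of the construction just described, writing (v,w) for the basis vector of Z attached to a pair): U is the span of all elements of the form

$$(v + v', w) - (v,w) - (v',w), \qquad (v, w + w') - (v,w) - (v,w'),$$
$$(tv, w) - t\,(v,w), \qquad (v, tw) - t\,(v,w),$$

and then T = Z/U, with the coset of (v,w) written v⊗w.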

In general, we want a basis to be a subset of the space, but for technical reasons, when we are building new spaces, the objects in them are new objects and so our desired subsets are not really subsets. but this does not matter, we just define an injective map and think of it as a substitute for the inclusion map of a subset. This is what all this "identify that gadget with this one" language is for.
 
  • #7
Thanks for a really helpful post ... such an overview is a great help ...

Still reflecting on what you have written ...

Peter
 
  • #8
i am very grateful for your feedback. you may have no idea how often i answer questions and am left hanging by the questioner. peace.
 
  • #9
I really think this very abstract approach is somewhat overkill for the very simple case of vector spaces. Essentially the exact same thing, but easier to grasp, is just to say that the tensor product of two vector spaces V,W consists of all linear combinations of simple products axb where a is a vector in V and b is a vector in W. of course that product sign x should have a little circle around it.

But then you add in that certain linear combinations should be equal, just the obvious ones. namely (a1+a2)xb should equal a1xb + a2xb, and (ta)xb should equal t.(axb) and also ax(tb) where t is a scalar. same for ax(b1+b2) and axb1 + axb2. that's it.
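
in the usual notation, these relations read (just a restatement of the paragraph above):

$$(a_1 + a_2) \otimes b = a_1 \otimes b + a_2 \otimes b, \qquad a \otimes (b_1 + b_2) = a \otimes b_1 + a \otimes b_2,$$
$$(ta) \otimes b = t\,(a \otimes b) = a \otimes (tb).$$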

even more concrete, if v1,...vn and w1,...,wm are bases of V,W then all products of those, namely { vixwj }, form a basis of the tensor product. That's really exactly what the abstract construction says. I.e. we started with a space Z whose basis was VxW, i.e. the basis was all simple products axb with a in V and b in W. But then we didn't really want them all to be independent, so we modded out by a big collection U of special linear combinations that we wanted to equal zero, namely stuff like (a1+a2)xb - a1xb - a2xb, and so on. that's the same as our explicit description above, except harder to understand.

then you have forced the map from VxW to Z/U taking (a,b) to axb, to be bilinear, and it's pretty obvious how to factor any bilinear map f:VxW-->M through a map Z/U-->M which turns out to be linear. namely just send axb to f(a,b) I guess, what else? and extend linearly.
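
in symbols: the induced map is ##\bar{f} : Z/U \to M## with ##\bar{f}(a \otimes b) = f(a,b)##, extended linearly; bilinearity of ##f## is exactly the condition that ##\bar{f}## sends all of ##U## to zero, which is what makes ##\bar{f}## well defined on the quotient.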
 
  • #10
The other presentation of tensors, which I find very easy to understand and which doesn't require an understanding of quotients, is as multilinear maps.

Step 1 is to define the dual space ##V^*## of a vector space ##V## over field ##F##, as the set of all linear maps from ##V## to ##F##. We can easily show ##V^*## to be a vector space over ##F##. The elements of ##V^*## are called one-forms over ##V##. The vector space ##(V^*)^*## is usually identified with ##V## by defining an action of a vector in ##V## on a vector in ##V^*##, but we won't do that here, to keep things simple.

Step 2 is to define an ##(m\ n)## tensor over ##V## to be a multilinear map from ##V^m\times (V^*)^n## to ##F##. We note that, under this definition, elements of ##(V^*)^*## and one-forms in ##V^*## are ##(0\ 1)## tensors and ##(1\ 0)## tensors respectively. It's important to also note that the ##\times## symbol and the power superscripts here denote Cartesian products, not tensor products (which we have not yet defined, and which will be different from Cartesian products).
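
For instance (my example, not part of the definition above): the evaluation map ##E : V \times V^* \to F## given by ##E(\vec v, \tilde u) = \tilde u(\vec v)## is linear in each argument separately, and so is a ##(1\ 1)## tensor.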

Step 3 is to define the tensor product of ##(m\ n)## tensor ##T## and ##(r\ s)## tensor ##S## to be the map ##T\otimes S:V^{m+r}\times(V^*)^{n+s}\to F## such that

$$(T\otimes S)(\vec v_1,...,\vec v_{m+r},\tilde u_1,...,\tilde u_{n+s})=
T(\vec v_1,...,\vec v_{m},\tilde u_1,...,\tilde u_{n})\cdot
S(\vec v_{m+1},...,\vec v_{m+r},\tilde u_{n+1},...,\tilde u_{n+s})$$

where we use arrows overhead to indicate elements of ##V## and tildes overhead to indicate elements of ##V^*##.

Step 4 is to show that the map ##T\otimes S## is multilinear, and hence is a ##(m+r\ n+s)## tensor.

This presentation lends itself to a coordinate-free (basis-free) understanding of tensors, because it does not even mention bases or coordinates.
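
Of course, once a basis is chosen one can still compute with components. As a concrete coordinate illustration (mine, not part of the presentation above, with made-up component values): a tensor is represented by its array of components, and the product of Step 3 becomes the outer product of those arrays.

[CODE]
import numpy as np

# Components of two (1 0) tensors (one-forms) on V = R^3 in a chosen basis.
T = np.array([1.0, 2.0, 3.0])     # T(e_i)
S = np.array([0.0, 1.0, -1.0])    # S(e_j)

TS = np.outer(T, S)               # (T ⊗ S)(e_i, e_j) = T(e_i) * S(e_j)

# Check the defining property (T ⊗ S)(v, w) = T(v) * S(w) on sample vectors.
v = np.array([1.0, -1.0, 2.0])
w = np.array([2.0, 0.0, 1.0])
print(np.isclose(v @ TS @ w, (T @ v) * (S @ w)))   # True
[/CODE]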
 
  • #11
Thanks so much Andrew ... does seem a straightforward and good approach ... !

Do you know a text(s) or set of online notes that uses that approach ... ?

Peter
 
  • #12
mathwonk said:
I really think this very abstract approach is somewhat overkill for the very simple case of vector spaces. Essentially the exact same thing, but easier to grasp, is just to say that the tensor product of two vector spaces V,W consists of all linear combinations of simple products axb where a is a vector in V and b is a vector in W. ... ...
Thanks mathwonk ... helped me to get the broad picture ...

Appreciate your help ...

What are your favourite texts and references for tensors ...

Peter
 
  • #13
a remark. andrewkirk's approach seems easier, but it comes at the cost of not really defining the tensor product but of defining instead the dual of the tensor product. hence to get the actual tensor product you have to dualize his definition, using the fact he mentions that the dual of the dual of an object is naturally isomorphic to the original object.

i.e. the purpose of defining a tensor product of V and W is to obtain a space T such that linear maps on T are equivalent to bilinear maps on VxW. So you can come at it backwards and just define the tensor product to be the dual of the space of bilinear maps on VxW, i.e. define T as (Bil(VxW))*. Then scalar valued linear maps on T will be, by definition, the dual of (Bil(VxW))*, i.e. (Bil(VxW))**, which is isomorphic to Bil(VxW), as desired. But this requires us to use the double dual isomorphism andrewkirk wanted to avoid. And it still requires one to check that linear maps from this object to any vector space correspond to bilinear maps from VxW to that space.

However, this is not meant as criticism, just clarification. Thanks for the post andrewkirk.
 
  • #14
Thanks mathwonk ... I am using your posts and andrew's as an essential overview to guide my detailed study of tensor products in several texts ... so thanks for the heads up in the above ...

Do you have any favourite books regarding the development of the tensor product ...

Peter
 
  • #15
You're right mathwonk that the tensor product I have defined differs from the usual definition in that it does not cover the case of tensor products between vectors, instead defining for that case the product between elements of ##V^{**}##. We can deal with that by instead defining the tensor product operator ##\otimes## as a map from ##\left(V\cup \mathscr{T}(V)\right)\times \left(V\cup \mathscr{T}(V)\right)## to ##\mathscr{T}(V)##, where ##\mathscr{T}(V)## is the set of all tensors over ##V##, of any contra and covariant degrees. The value of ##a\otimes b## is ##\zeta(a)\otimes_{old}\zeta(b)## where ##\otimes_{old}## is the tensor product defined in my earlier post and ##\zeta:V\cup \mathscr{T}(V)\to \mathscr{T}(V)## maps ##a## to ##a## if ##a## is already a tensor, otherwise it maps ##a## to ##a^{**}##.

The tensor product thus defined operates on tensors and/or members of the base vector space ##V##, as required.

I've cheated there in that I have used the notion of a dual-dual vector ##a^{**}##, without defining a dual vector. We've defined dual spaces, but not how to find the dual of a specific vector. If the space ##V## has an inner product, then we can unambiguously define the dual of a specific vector ##\vec v## as the one-form (which can be shown to be unique) in the dual space ##V^*## that gives 1 when applied to ##\vec v##, and 0 when applied to any vector that is orthogonal to ##\vec v##.

Do you think that's right, that it's impossible to uniquely define the dual of a vector without an inner product?

I suspect that, if the vector space does not have an inner product, and cannot be equipped with a unique one (which we can do if the space is finite dimensional and the field ##F## is complete, but not necessarily otherwise) then it may not be possible to unambiguously define the dual of a vector. If so, then we would have to switch to the quotient-based definition for those cases. Fortunately, the most important vector spaces used in physics of which I am aware, being Euclidean space, the tangent spaces to the spacetime manifold and the Hilbert Spaces of quantum mechanics, all have inner products.
 
  • #16
andrewkirk said:
I suspect that, if the vector space does not have an inner product, and cannot be equipped with a unique one.

What does it mean for a vector space to be equipped with a unique inner product? Even Euclidean spaces can be equipped with multiple inner products.

And what does it mean for a field to be complete?
 
  • #17
micromass said:
Even Euclidean spaces can be equipped with multiple inner products.
Interesting, I wasn't aware of that. Do different inner products on a finite dimensional Euclidean space (over ##\mathbb{R}##) lead to different maps from a vector to its dual?

micromass said:
And what does it mean for a field to be complete?

Just shorthand. I mean metrically complete, i.e., it has a metric and Cauchy sequences converge.
 
  • #18
andrewkirk said:
Interesting, I wasn't aware of that. Do different inner products on a finite dimensional Euclidean space (over ##\mathbb{R}##) lead to different maps from a vector to its dual?

Yes. Other inner products aren't difficult to construct. Take ##\mathbf{x}## and ##\mathbf{y}## and a positive definite matrix ##A##, then we can form ##\mathbf{x}^TA\mathbf{y}##. They are all isomorphic of course.
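
For concreteness, a small numpy sketch of this recipe (the matrix here is random, chosen purely for illustration):

[CODE]
import numpy as np

# Any positive definite A gives an inner product <x, y>_A = x^T A y.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M.T @ M + np.eye(3)    # M^T M is positive semidefinite; adding I makes it definite

x = rng.standard_normal(3)
y = rng.standard_normal(3)

print(x @ A @ y, x @ y)    # <x, y>_A generally differs from the standard dot product
print(x @ A @ x > 0)       # positive definiteness: True for this nonzero x
[/CODE]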

andrewkirk said:
Just shorthand. I mean metrically complete, i.e., it has a metric and Cauchy sequences converge.

That would make all finite fields complete with the discrete metric, but these do not have any inner products. In fact, I don't know any sensible definition of an inner product over fields other than ##\mathbb{R}## or ##\mathbb{C}##.
 
  • #19
micromass said:
Yes. Other inner products aren't difficult to construct. Take ##\mathbf{x}## and ##\mathbf{y}## and a positive definite matrix ##A##, then we can form ##\mathbf{x}^TA\mathbf{y}##. They are all isomorphic of course.
Right, so if my calcs are correct then, given an inner product ##A:V\times V\to F##, we define a map ##B_A:V\to V^*## by ##(B_A(\vec v))(\vec u) =A(\vec v,\vec u)##. So the dual vector of ##\vec v## is ##\vec v^*\equiv B_A(\vec v)##. The map from vector to dual is determined by the inner product, which has to be specified, because vector spaces that admit inner products admit multiple different ones, which would give different such maps.
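
In coordinates (a sketch of my own, with a made-up positive definite matrix): for symmetric ##A##, the components of ##B_A(\vec v)## are just ##A\vec v##, since ##\vec v^T A \vec u = (A \vec v) \cdot \vec u##, so two different inner products visibly send the same vector to different duals.

[CODE]
import numpy as np

A1 = np.eye(2)                # the standard dot product
A2 = np.array([[2.0, 1.0],
               [1.0, 2.0]])   # symmetric, eigenvalues 1 and 3: positive definite

v = np.array([1.0, 0.0])
print(A1 @ v)                 # components of B_{A1}(v): [1. 0.]
print(A2 @ v)                 # components of B_{A2}(v): [2. 1.]
[/CODE]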

Hence, the extension of post #15 looks like it won't work for vector spaces that do not have a specified inner product. The quotient approach would need to be used for them.
micromass said:
That would make all finite fields complete with the discrete metric, but these do not have any inner products.
Good point. I see wiki says that one requirement for an inner product to be possible is that the field have an ordered subfield. I suppose ##\mathbb{R}## fills that subfield role for ##\mathbb{C}##. I think there are other requirements as well, but couldn't find a clear statement of what they are. The only fields I can see mentioned are ##\mathbb{R}## and ##\mathbb{C}##, or quadratically closed subfields thereof. Metric completeness is not necessary, contrary to my earlier suggestion. I wonder if it is absolutely the case that subfields of ##\mathbb{C}## and ##\mathbb{R}## (or isomorphs thereof) are the only fields over which a vector space can have an inner product and, if so, whether anybody has proven it.
 
  • #20
... thanks to Andrew, mathwonk and micromass for your previous posts on this topic ... I have been doing a lot of reflecting on what you have said ... but still have some issues ... hope you can clarify things further ... especially the representation of elements of [itex]V[/itex] ... I wish to pursue further Andrew's point about identifying each element [itex] x \in X [/itex] with [itex] \chi_x [/itex]

Although I now understand that an m-tuple is a function ... I am still unsure about what is going on in Cooperstein's move in Theorem 10.1 where he considers the elements of the vector space [itex]Z[/itex] to be of the form [itex]( v_1, v_2, \ ... \ ... \ , \ v_m )[/itex] ... ...

Now Cooperstein defines [itex]V[/itex] by [itex]V = \mathcal{M}_{ fin } ( X, \mathbb{F} )[/itex] ... ...

So [itex]V[/itex] is the set of all functions [itex]f \ : \ X \longrightarrow \mathbb{F}[/itex] such that the support of [itex]f[/itex] is finite ... ...

Cooperstein defines [itex]\iota \ : \ X \longrightarrow V[/itex] by [itex]\iota (x) = \chi_x[/itex] ...

... and shows that [itex]\mathcal{B} = \{ \chi_x \mid x \in X \}[/itex] is a basis for [itex]V[/itex] ...

So an element of [itex]V[/itex] would be (I think ... am I correct?)

[itex]f = c_1 \chi_{x_1} + c_2 \chi_{x_2} + \ ... \ ... \ + c_m \chi_{x_m} [/itex]

and another element would be

[itex]f' = c'_1 \chi_{x'_1} + c'_2 \chi_{x'_2} + \ ... \ ... \ + c'_n \chi_{x'_n} [/itex]

and we could formally add these so

[itex]f + f' = ( c_1 \chi_{x_1} + c_2 \chi_{x_2} + \ ... \ ... \ + c_m \chi_{x_m} ) + ( c'_1 \chi_{x'_1} + c'_2 \chi_{x'_2} + \ ... \ ... \ + c'_n \chi_{x'_n} )[/itex]

Is that right?

... ... BUT ... ... in the proof of Theorem 10.1 Cooperstein writes the elements of V as [itex]( v_1, v_2, \ ... \ ... \ , \ v_m )[/itex] and [itex]( v'_1, v'_2, \ ... \ ... \ , \ v'_m )[/itex] ... ...

... ... ? ... ... is this just a convenient way to write [itex]f[/itex] and [itex]f'[/itex] ... ? ... if so, is it a problem that f and f' may have a different number of terms in their sums due to different supports ...
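
Presumably the sum just collects coefficients over the union of the two supports, treating any missing term as having coefficient ##0## ... that is

$$f + f' = \sum_{x \in \text{supp}(f) \cup \text{supp}(f')} \big( f(x) + f'(x) \big) \chi_x$$

... but I am not sure this is the right way to think about it ...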

How is Cooperstein's terminology of [itex]( v_1, v_2, \ ... \ ... \ , \ v_m )[/itex] for an element of V justified?

Hope someone can help ...

Peter

===========================================================

For the convenience of readers of this post I am providing the text of Cooperstein's introduction to Section 10.1: Introduction to Tensor Products ... ... the text will include the statement of Theorem 10.1 and the start of the proof ... ... as follows:
[Image: Cooperstein - 1 - Section 10.1 - PART 1]

[Image: Cooperstein - 2 - Section 10.1 - PART 2]

[Image: Cooperstein - 3 - Section 10.1 - PART 3]

[Image: Cooperstein - 4 - Section 10.1 - PART 4]
 

  • #21
Since the last posts in this thread I've been doing a little exploring of the alternative definitions of tensors and tensor products, as:
- quotients of a vector space of formal sums; or
- multilinear maps.

I have discovered a few interesting things along the way, including the realisation that my guess in post 15 (that one needs an inner product in order to identify a canonical isomorphism between a vector space and the dual of its dual) was incorrect. It turns out that, although one cannot uniquely identify the dual of a vector without an inner product, one can uniquely identify the dual of the dual of a vector.

As I've wandered through various considerations and possibilities, I find myself becoming increasingly convinced that the multilinear map approach is preferable. Not only is it easier to understand, but it also avoids the need to be constantly identifying different isomorphic objects in order to prevent one's equations becoming - strictly speaking - meaningless.

I think the quotient approach is very rewarding as a means of providing an additional perspective, once one has already developed a sound understanding of tensors. It's always good to have more than one way of looking at something. It may also be the case that the quotient approach provides a better base for launching into the study of symmetric and antisymmetric algebras. But I'm struggling to find an advantage of the quotient approach over the multilinear map approach other than those two.

Perhaps somebody who prefers the quotient approach can suggest some aspects of it that they find preferable. @mathwonk I mulled for some time over your comment
mathwonk said:
[the multilinear map definition] comes at the cost of not really defining the tensor product but of defining instead the dual of the tensor product. hence to get the actual tensor product you have to dualize his definition
While at first persuaded by this, I decided after reflection that whether this criticism stands really depends on what we want to call the 'tensor product'. If we say it is the quotient then of course the multilinear map approach doesn't define ##V\otimes W## to be that object, but instead an isomorphic image of it. But equally, if we define the tensor product to be the multilinear map, the quotient approach doesn't define ##V\otimes W## to be the tensor product but an isomorphic image of it. So, whichever approach we take, we need to apply an isomorphism to get from the object defined in that approach as 'tensor product' to the object defined in the other. Hence, from that point of view, the two approaches rank equally. Given that they rank equally in that regard, the following considerations lead me to favour the multilinear map approach for a primary definition of tensors and tensor products:
  1. easier to understand
  2. does not require identifications and related abuses of notation
  3. makes explicit the difference between ##v## and ##v^{**}##, consistent with the fact that the first is an element of ##V## and the second is a subset of ##V^*\times F##
  4. avoids confusion about whether the inputs to a tensor product are vectors or tensors. In the multilinear map approach, they are always tensors. In the quotient approach, it is typically defined as between two vectors, which then can lead to unnecessary complexity when we want to multiply a ##(m\ n)## tensor over ##V## by a ##(r\ s)## tensor over ##V##.
But I suspect I'm only considering one side of the issue. I'd be grateful for people pointing out any advantages of the quotient approach or disadvantages of the multilinear map approach that I have missed.

Thanks

Andrew
 
  • #22
Thanks for the post Andrew ... interesting but challenging ...

Still reflecting on what you have said ...

Peter
 
  • #23
many people prefer the multilinear map approach, in the case of finite dimensional vector spaces over a field, because it is easier to define, even though it gives the dual of the dual of the "correct" thing, which is of course naturally isomorphic to the correct thing. however there are reasons to want to understand the concept of tensors and of multilinear maps in settings where the scalars are not chosen from a field but only from a ring, and possibly a ring that is not a domain. in this setting the dual of the dual may not be isomorphic to the original space. in fact even over a field the dual of the dual is not the original space except in finite dimensions. so in settings where infinite dimensional vector spaces are needed, or even finitely generated modules defined over a more general ring, one needs to use the direct approach via a quotient of sums, rather than double dualizing.
 
  • #24
mathwonk said:
in fact even over a field the dual of the dual is not the original space except in finite dimensions
I believe I have a proof that it is (isomorphic, that is). At least, I did a proof that I don't think assumed anywhere that the underlying vector space is finite dimensional. I'll check that and post it if it's correct.

Re the scalars being chosen from a ring - that would make the space a module rather than a vector space. I wasn't aware that tensor-equivalents were used with modules. Do they turn out to be useful / interesting?
 
  • #25
the natural map from the space to its double dual is injective even in infinite dimensions but unfortunately that does not imply surjectivity in that case.
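
concretely, the map in question is ##\eta : V \to V^{**}## defined by ##\eta(v)(\varphi) = \varphi(v)## for ##\varphi \in V^*##; injectivity is easy to check, but in infinite dimensions ##\eta## fails to be surjective.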
 
  • #26
Here's my proof that ##V\cong V^*##, and hence that ##V\cong V^{**}##. It doesn't assume finite dimensionality or the existence of an inner product. It does assume the existence of a basis of ##V##, which requires the assumption of Zorn's Lemma (Axiom of Choice).

Surjectivity is proven towards the end of p2.

It's quite possible that it contains a mistake, such as a hidden, unjustified assumption. If so, I'd be grateful to anybody that can point it out.

Here's a link to the latex code, to make it easier to comment on it.
 
  • #27
Question on this:

[Image: tensor.jpg (excerpt from the linked proof, defining ##\tilde q##)]


Is it obvious that ##\tilde q## is well defined? Don't all but a finite number of the ##\tilde p(\vec e_\beta)## have to be 0?
 
  • #28
"Here's my proof that V V*. It doesn't assume finite dimensionality ..."​

This cannot be true because, for example, consider a countable-dimensional vector space V over the field ℚ of rational numbers. This is in one-to-one correspondence with the set of all countable sequences of rational numbers, only finitely many of which are nonzero. It is not difficult to show, therefore, that the cardinality of V is

card(V) = ##\aleph_0##,

which is the same cardinality as that of ℚ itself.

(((
Note that a basis B for

V = ℚ ⊕ ℚ ⊕ ℚ ⊕ ...​

is the set of those vectors of the form

(0, 0, ..., 0, 1, 0, ..., 0, ...),​

in other words those vectors having a 1 as the jth component for some finite j, and all the other components equal to 0.
)))

Now consider the dual vector space V*. An element of V* is determined uniquely by its value on each basis element in B. Furthermore, any such choice of values for each element of B determines uniquely an element of V*.

But how many such choices are there? There are ##\aleph_0## for the first component, ##\aleph_0## for the second component, and in fact ##\aleph_0## for the jth component, for each j = 1, 2, 3, ..., n, ...

Thus we have

$$\text{card}(V^*) = \text{card}(\mathbb{Q}) \times \text{card}(\mathbb{Q}) \times \text{card}(\mathbb{Q}) \times \dots = \text{card}(\mathbb{Q})^{\aleph_0} = \aleph_0^{\aleph_0}$$

and so

$$\text{card}(V^*) \geq 2^{\aleph_0}.$$

But the right-hand term is well known, by Cantor's diagonal argument, to be of a strictly greater cardinality than ##\aleph_0 = \text{card}(V)##. Hence:

$$\text{card}(V^*) > \text{card}(V).$$

For this reason, it is impossible for the vector spaces V and V* to be isomorphic in this case.

Incidentally, essentially the same argument works no matter what field the vector space is over, as long as the dimension of the vector space is infinite: An infinite-dimensional vector space is never the same cardinality as its dual, and hence cannot be isomorphic to it. (See https://en.wikipedia.org/wiki/Dual_space#Infinite-dimensional_case.)
 
  • #29
Samy_A said:
Is it obvious that ##\tilde q## is well defined? Don't all but a finite number of the ##\tilde p(\vec e_\beta)## have to be 0?
By Jove, that's well spotted. What a show-stopper! Some things seem obvious after they've been pointed out but, even after your having pointed that one out, I can't imagine that I would have ever noticed it otherwise.
I suppose that's it for isomorphism then.
 

