Proof of Existence of Tensor Product ... Cooperstein

In summary, Bruce N. Cooperstein's book, Advanced Linear Algebra (Second Edition), treats an $n$-tuple formally as a function from the index set $\{1,\dots,n\}$ to a set, and constructs a vector space based on an arbitrary set; the $n$-tuple $(x_1,\dots,x_n)$ is simply notation for the ordered list of values taken by that function. The thread below works through how these constructions fit together in the proof that a tensor product exists.
  • #1
Math Amateur
I am reading Bruce N. Cooperstein's book: Advanced Linear Algebra (Second Edition) ... ...

I am focused on Section 10.1 Introduction to Tensor Products ... ...

I need help with the proof of Theorem 10.1 on the existence of a tensor product ... ... Theorem 10.1 reads as follows:

View attachment 5383

In the above text we read the following:

" ... ... Because we are in the vector space \(\displaystyle Z\), we can take scalar multiples of these objects and add them formally. So for example, if \(\displaystyle v_i , v'_i \ , \ 1 \leq i \leq m\), then there is an element \(\displaystyle (v_1, \ ... \ , \ v_m ) + (v'_1, \ ... \ , \ v'_m )\) in \(\displaystyle Z\) ... ... "


So it seems that the elements of the vector space \(\displaystyle Z\) are of the form \(\displaystyle (v_1, \ ... \ , \ v_m )\) ... ... the same as the elements of \(\displaystyle X\) ... that is \(\displaystyle m\)-tuples ... except that \(\displaystyle Z\) is a vector space, not just a set so that we can add them and multiply elements by a scalar from \(\displaystyle \mathbb{F}\) ... ...

... ... BUT ... ...

... earlier in 10.1 when talking about a UMP ... Cooperstein discussed a vector space \(\displaystyle V\) based on a set \(\displaystyle X\) and defined \(\displaystyle \lambda_x\) to be a map from \(\displaystyle X\) to \(\displaystyle \mathbb{F}\) such that

\(\displaystyle \lambda_x (y) = 1\) if \(\displaystyle y = x\) and \(\displaystyle 0\) otherwise ...

Then \(\displaystyle i \ : \ X \longrightarrow V\) was defined by \(\displaystyle i(x) = \lambda_x\)

... as in the Cooperstein text at the beginning of Section 10.1 ...

The relevant text from Cooperstein reads as follows:View attachment 5384
https://www.physicsforums.com/attachments/5385

So ... given the construction and the definitions in the text directly above from the beginning of Section 10.1 ... and comparing this with Theorem 10.1 ... it appears that in the case of the beginning of Theorem 10.1 where \(\displaystyle Z\) takes the place of \(\displaystyle V\), the elements of \(\displaystyle Z\) should be of the form \(\displaystyle \lambda_x\) ... not of the form \(\displaystyle (v_1, \ ... \ , \ v_m )\) ... ... ?

Can someone please clarify the nature of the elements of \(\displaystyle Z\) ... are they of the same form as the elements of \(\displaystyle X\) ... that is \(\displaystyle m\)-tuples ... or are they of the form \(\displaystyle \lambda_x\) ... ... ?

Hope someone can help ... ...

Peter
 
  • #2
There are different ways authors discuss the question: "what is an $n$-tuple?"

In fact, some don't even discuss it at all, but rather take it as "obvious" one should know what an $n$-tuple is.

But formally, an $n$-tuple is a FUNCTION:

$f: \{1,2,\dots,n\} \to X$, where $X$ can be ANY SET.

So $f(j) \in X$, for each $j$, and it is common to represent the image of $j$ as $x_j$, and the ENTIRE FUNCTION $f$ as:

$(x_1,x_2,\dots,x_n)$.

Equivalently, for FINITE $n$, one can define an $n$-tuple as an element of the $n$-fold cartesian product:

$X \times X \times\cdots \times X$

The "indexed" version I gave first generalizes much better for infinite sets, because with "infinite tuples" it can be unclear how (or downright impossible) to list them as a linear array.

***********

Ok, so now let's talk about what "the vector space based on $X$" is. I will use a down-to-earth example.

Let $X = \{\text{Alice},\text{Bob},\text{Carol}\}$. We will suppose that this set refers to three honest-to-goodness real people. We would like to turn this set into a vector space.

Well, we have a problem: we can add vectors (they form an abelian group), but what the heck should:

$\text{Alice} + \text{Bob}$ even MEAN? Clearly, Alice and Bob aren't field elements, or group elements, or anything of the sort.

Well, we can use a clever trick computer programmers use; we create a Boolean function. This is nothing more than a function:

$f:X \to \{0,1\}$, where $f(x) = 1$ means "$x$ is true" and $f(x) = 0$ means "$x$" is false.

So we can create a function called:

"Are you Alice?". Such a function is called (because mathematicians love to make things really, really confusing) the characteristic function:

$\chi_{\text{Alice}}$.

We have three such functions, one for each person in our set.

Now we have a bijection:

$\phi:\{1,2,3\} \to \{\text{Alice},\text{Bob},\text{Carol}\}$. All this really says is we have three people, all different from each other.

In order to reduce the amount of typing I have to do, I am going to refer to these people henceforth as $A,B,C$. I hope this causes no confusion.

Now, in a feat of extraordinary mathematical sleight of hand, we can consider the following functions:

$\chi_A \circ \phi$
$\chi_B \circ \phi$
$\chi_C \circ \phi$.

Now these functions go from $\{1,2,3\}$ to the set $\{0,1\}$, so they are triples, namely:

$(1,0,0),(0,1,0),(0,0,1)$. If we map these to the standard basis vectors of $F^3$ (for any field $F$, which always has a 1 and a 0), we can now define:

$xA + yB + zC \leftrightarrow (x,y,z)$ and use the vector addition on $F^3$ to define a vector addition on a vector space with basis vectors Alice, Bob and Carol.

So Alice + Bob corresponds to $(1,1,0)$, which expressed in that basis remains simply "Alice + Bob", or if you prefer:

"one Alice and one Bob".
 
  • #3
Hi Deveno ... thanks for a really great example to learn from ...

I really want to understand this fully ... so pardon a few possibly basic questions ... I don't want to wave my hand over things that look about right to me when my understanding is vague ... so here goes a few questions about things I follow only vaguely ...

====================================================

Question 1

When you write:

" ... ... Now we have a bijection:

$\phi:\{1,2,3\} \to \{\text{Alice},\text{Bob},\text{Carol}\}$. All this really says is we have three people, all different from each other. ... ... "

Do you mean the following bijection ...

\(\displaystyle \phi (1) =\) Alice

\(\displaystyle \phi (2) =\) Bob

\(\displaystyle \phi (3) =\) Carol Is that correct?

=====================================================

Question 2

You write:

" ... ... Now, in a feat of extraordinary mathematical sleight of hand, we can consider the following functions:

$\chi_A \circ \phi$
$\chi_B \circ \phi$
$\chi_C \circ \phi$.

Now these functions go from $\{1,2,3\}$ to the set $\{0,1\}$, so they are triples, namely:

$(1,0,0),(0,1,0),(0,0,1)$. ... ... ... "


My question relates to structure ... by which I mean how can you call these functions "triples" ... surely a function is a set of ordered pairs ...

Indeed we have ...

\(\displaystyle \chi_A \circ \phi (1) = \chi_A ( \phi (1) ) = \chi_A (A) = 1 \)

\(\displaystyle \chi_A \circ \phi (2) = \chi_A ( \phi (2) ) = \chi_A (B) = 0 \)

\(\displaystyle \chi_A \circ \phi (3) = \chi_A ( \phi (3) ) = \chi_A (C) = 0 \)

Now a function is a set of ordered pairs no two of which has the same first member ...

... ... so \(\displaystyle \chi_A \circ \phi = \{ (1,1) , (2,0) , (3,0) \} \)

So my question is ... in what sense and why can we call the function \(\displaystyle \chi_A \circ \phi\) a "triple" ... ...

(Note: I can see that the set of images of the function is \(\displaystyle \{ 1 , 0 , 0 \}\) ... ... but even that is not an ordered set or sequence.)

=====================================================

Question 3

You write:

" ... ... If we map these to the standard basis vectors of $F^3$ (for any field $F$, which always has a 1 and a 0), we can now define:

$xA + yB + zC \leftrightarrow (x,y,z)$ and use the vector addition on $F^3$ to define a vector addition on a vector space with basis vectors Alice, Bob and Carol.

So Alice + Bob corresponds to $(1,1,0)$, which expressed in that basis remains simply "Alice + Bob" ... ... "
Can you explain exactly how we define

\(\displaystyle xA + yB + zC \)?

What are the \(\displaystyle x, y\) and \(\displaystyle z\) exactly ... ?

and how exactly (formally and rigorously) do we get Alice + Bob \(\displaystyle \equiv (1, 1, 0)\) ..

====================================================

Hope you can clarify the issues above ...

Peter
 
  • #4
The triple $(a,b,c)$ of real numbers, let's say, *is* the function $f: \{1,2,3\} \to \Bbb R$ given by:

$f(1) = a$
$f(2) = b$
$f(3) = c$.

In other words, $(a,b,c)$ is a convenient short-hand for saying: I have a function which has three images, and they are ordered (by virtue of the natural order of the natural numbers) in the following way:

$a$ is the first image, $b$ is the second image, and $c$ is the third image.

Since $\{1,2,3\}$ is an ordered set, we use that SAME ordering to obtain the ordered image: $(a,b,c)$.

So $\chi_A \circ \phi$ represents the ordered triple $(1,0,0)$:

$1 \to \text{Alice} \to 1$
$2 \to \text{Bob} \to 0$ (Bob is not Alice)
$3 \to \text{Carol} \to 0$ (Carol is not Alice)

The "ordering" we obtain of $\{\text{Alice},\text{Bob},\text{Carol}\}$ is somewhat artificial, I did it alphabetically, but in truth any ordering would have worked (there are different bijections $\phi$ that are possible). That doesn't really matter-pick a bijection, and stick with it. We just want to make a vector space-there is no claim this is the only way we can do this (in fact, there are, for a set of $n$ elements, $n!$ such bijections, so we have 6 possible orderings of our set of three people. All of them lead to isomorphic vector spaces).

Recall that a SEQUENCE is just a function:

$f: \Bbb N \to X$, with $f(n) = a_n$. Instead of writing:

$S = (a_0,a_1,a_2,\dots)$ we could just as easily, but not so handily, write:

$S = \{(0,f(0)),\ (1,f(1)),\ (2,f(2)),\ \dots\}$ which is correct, but cumbersome. I hope this answers question 2.
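A one-line sanity check of that equivalence (a Python sketch of mine): the set-of-ordered-pairs presentation and the tuple presentation of $\chi_A \circ \phi$ carry the same information, with the order inherited from $\{1,2,3\}$:

```python
# chi_A . phi written as a set of ordered pairs, as in question 2 above.
pairs = {(1, 1), (2, 0), (3, 0)}

# Recover the tuple: sort by the first coordinate (the order on {1,2,3})
# and read off the second coordinates.
as_dict = dict(pairs)
triple = tuple(as_dict[j] for j in sorted(as_dict))

assert triple == (1, 0, 0)
```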

Yes, your question 1 is exactly what I intended.

As for question 3, the $x,y,z$ are scalars, or field elements. To evaluate what:

$xA + yB + zC$ means, we use the correspondence (bijection):

$A \sim (1,0,0)$
$B \sim (0,1,0)$
$C \sim (0,0,1)$

that is, using our composed function $\chi_A \circ \phi$, we've found a way to turn "Alice" into a basis vector (the IMPORTANT thing is our basis has the same number of elements as our set of three people).

So an expression like $\frac{1}{2}A + 3B - \pi C$ is "mentally transformed" into the triple $(\frac{1}{2},3,-\pi)$; this "mental transformation" is given the rather imposing-sounding name of:

"formal linear combination".

and the resulting vector space is given the abstract-sounding name: "the free vector space over $F$ generated by Alice, Bob and Carol".

Now, in this example, let's see how the universal mapping property plays out:

Given ANY function $f:\{A,B,C\} \to V$ for any vector space $V$, we know there must be a UNIQUE linear mapping $L:F^3 \to V$ such that if $I$ is the inclusion of $\{A,B,C\}$ in $F^3$ as:

$A \mapsto (1,0,0)$
$B \mapsto (0,1,0)$
$C \mapsto (0,0,1)$

(this, strictly speaking, is not an inclusion, but remember our "mental transformation": we are not "really" dealing with $F^3$ but with "formal linear combinations" of $A,B,C$. So what I mean is not $F^3$ "actually", but the isomorphic space:

$\{xA + yB + zC : x, y, z \in F\}$, and so we have the "inclusion" (which now seems more like a proper inclusion):

$A \mapsto A = 1A + 0B + 0C$
$B \mapsto B = 0A + 1B + 0C$
$C \mapsto C = 0A + 0B + 1C$)

we have $f= L \circ I$.

I will choose the function "____'s favorite polynomial". Now I know Alice, Bob and Carol "really well", and I can tell you that:

$f(A) = x^2 + 1$
$f(B) = x^2 + x + 1$
$f(C) = x^3 - x$ (Carol's an odd one, eh?)

(this is just an ordinary set-function; don't go trying to see if it is linear or any other such nonsense).

Using our basis of $\{A,B,C\}$ which "equals" (not really, but we have an isomorphism, so don't quibble) $\{(1,0,0),(0,1,0),(0,0,1)\}$, we need to find an $L$ (and it better be uniquely defined) such that:

$L: F^3 \to F[x]$

is a linear map such that:

$L(1,0,0) = x^2 + 1$
$L(0,1,0) = x^2 + x + 1$
$L(0,0,1) = x^3 - x$.

I claim that function is (and you should verify this, and that it is indeed linear!):

$L(a,b,c) = cx^3 + (a+b)x^2 + (b-c)x + (a+b)$, or, if you prefer:

$L(aA + bB + cC) = cx^3 + (a+b)x^2 + (b-c)x + (a+b)$

and we have $L(A) = 0x^3 + (1+0)x^2 + (0-0)x + (1+0) = x^2 + 1 = f(A)$

$L(B) = 0x^3 + (0+1)x^2 + (1-0)x + (0+1) = x^2 + x + 1 = f(B)$

$L(C) = 1x^3 + (0+0)x^2 + (0-1)x + (0+0) = x^3 - x = f(C)$.
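A quick Python check of this map (a sketch of mine, not from any text; a polynomial is stored as its coefficient tuple $(c_0, c_1, c_2, c_3)$, constant term first):

```python
# L(a,b,c) = c x^3 + (a+b) x^2 + (b-c) x + (a+b), as a coefficient tuple.
def L(a, b, c):
    return (a + b, b - c, a + b, c)

assert L(1, 0, 0) == (1, 0, 1, 0)    # x^2 + 1      = f(Alice)
assert L(0, 1, 0) == (1, 1, 1, 0)    # x^2 + x + 1  = f(Bob)
assert L(0, 0, 1) == (0, -1, 0, 1)   # x^3 - x      = f(Carol)

# Linearity spot-check: L(u + v) = L(u) + L(v) componentwise.
u, v = (1, 2, 3), (4, 0, -2)
lhs = L(*(x + y for x, y in zip(u, v)))
rhs = tuple(x + y for x, y in zip(L(*u), L(*v)))
assert lhs == rhs
```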
 
  • #5
Thanks again for the help Deveno ...

Just trying to follow your example closely ... was OK until I came to your explanation of the UMP ... which I tried to match to Cooperstein's explanation of the UMP ... but then had some issues trying to match the two constructions ... ...

You write:

" ... ... Now, in this example, let's see how the universal mapping property plays out:

Given ANY function $f:\{A,B,C\} \to V$ for any vector space $V$, we know there must be a UNIQUE linear mapping $L:F^3 \to V$ such that if $I$ is the inclusion of $\{A,B,C\}$ in $F^3$ as: ... ... etc etc ... ... "
I tried to follow this by looking at Cooperstein ... and could not match the situations ... can you explain why your UMP seems different from Cooperstein's ... ...

I have drawn a diagram to illustrate my confusion over the UMP ... as follows ... ...
View attachment 5388

Can you help ... ?

Why are the two approaches to the UMP apparently different ... ? How do we reconcile the two approaches ... ...

It may help if readers see more details on the Cooperstein approach so I am providing the relevant text as follows:

https://www.physicsforums.com/attachments/5389
https://www.physicsforums.com/attachments/5390
 
  • #6
No, that is not right.

We are IDENTIFYING $FV(X)$ with $F^{|X|}$; you also did not include the mapping $X \to F[x]$ (which I called "favorite polynomial").

Cooperstein's $\iota$ is typically called "the inclusion of generators".

So "my diagram" should have an extra arrow you did not draw, and the arrow between $V$ and $F^3$ should be replaced by just $V \cong F^3$.

If you have a mapping $A \to C$, and $A$ and $B$ are isomorphic, then you also have a mapping $B \to C$ (compose with the isomorphism $B \to A$), which is usually so obvious you don't even need to write it down.

Remember this: the "vector space based on $X$" is not uniquely determined; it is only unique up to a unique isomorphism. This means that any isomorphic vector space will possess the same UMP; we'll just have to use a different "iota" (inclusion, or more accurately: injective map), and a different linear map.

It's like this: if we have the maps:

$\iota: X \to V$ (inclusion of generators)
$L: V \to W$ (linear map from $V$ to $W$, where $W$ is arbitrary; this is the map the UMP says exists)
$f: X \to W$ (arbitrary FUNCTION from $X$ into $W$)

where $f = L \circ \iota$

and $\phi:V \to U$ is a linear isomorphism, then we also have:

$\iota': X \to U$
$L' : U \to W$
$f: X \to W$

we can simply take $\iota' = \phi \circ \iota$, and $L' = L \circ \phi^{-1}$, and then:

$L' \circ \iota' = (L \circ \phi^{-1})\circ (\phi \circ \iota) = L \circ (\phi^{-1}\circ \phi) \circ \iota$

$ = L \circ 1_V \circ \iota = L \circ \iota = f$

so $(U,\iota')$ satisfies the SAME UMP as $(V,\iota)$ does.

A general principle of abstract algebra is:

Isomorphic objects are essentially the same, only the names of things are changed.

So things that "seem quite different" like the set of complex numbers $\{1,i,-1,-i\}$ and the set of congruences modulo $4$: $\{\overline{0},\overline{1},\overline{2},\overline{3}\}$ may appear to be two different beasts, but they are, to an algebraist, just "the" cyclic group of order $4$ which is unique, up to isomorphism.

This is important, because you have to realize that in studying tensors, there is only "a" tensor product, which is only defined up to isomorphism by the universal mapping property. So the "abstract" form of a tensor product may "look" quite different than some concrete REALIZATION of it. In fact, one of the ways one often PROVES something IS a tensor product, is to show it satisfies the UMP, rather than "digging into its guts".
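The transport argument above can be sketched in a few lines of Python (with hypothetical choices of mine: $V = U = F^3$, $\phi$ a coordinate shift, and $W = F$):

```python
# UMP data for (V, iota): inclusion of generators and the induced linear map.
iota = {"A": (1, 0, 0), "B": (0, 1, 0), "C": (0, 0, 1)}
f = {"A": 10, "B": 20, "C": 30}      # arbitrary set-function X -> W

def L(v):                            # linear, with L(iota(x)) = f(x)
    a, b, c = v
    return 10 * a + 20 * b + 30 * c

def phi(v):                          # an isomorphism phi: V -> U (cyclic shift)
    a, b, c = v
    return (c, a, b)

def phi_inv(u):                      # its inverse
    a, b, c = u
    return (b, c, a)

iota_p = {x: phi(iota[x]) for x in iota}    # iota' = phi o iota
L_p = lambda u: L(phi_inv(u))               # L'    = L o phi^{-1}

# L' o iota' = L o (phi^{-1} o phi) o iota = L o iota = f, as claimed:
for x in iota:
    assert L_p(iota_p[x]) == f[x]
```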
 
  • #7
Thanks Deveno ...

Now working through your post and reflecting and thinking about what you have said ...

Peter
 
  • #8
Hi Deveno ... thanks so much for your previous posts on this topic ... I have been doing a lot of reflecting on what you have said ... but still have some issues ... hope you can clarify things further ... especially the representation of elements of \(\displaystyle V\) ...

Although I now understand that an m-tuple is a function ... I am still unsure about what is going on in Cooperstein's move in Theorem 10.1 where he considers the elements of the vector space \(\displaystyle Z\) to be of the form \(\displaystyle ( v_1, v_2, \ ... \ ... \ , \ v_m )\) ... ...

Now Cooperstein defines \(\displaystyle V\) by \(\displaystyle V = \mathcal{M}_{ fin } ( X, \mathbb{F} )\) ... ...

So \(\displaystyle V\) is the set of all functions \(\displaystyle f \ : \ X \longrightarrow \mathbb{F}\) such that the support of \(\displaystyle f\) is finite ... ...

Cooperstein defines \(\displaystyle \iota \ : \ X \longrightarrow V\) by \(\displaystyle \iota (x) = \chi_x\) ...

... and shows that \(\displaystyle \mathcal{B} = \{ \chi_x \ | \ x \in X \}\) is a basis for \(\displaystyle V\) ...

So an element of \(\displaystyle V\) would be (I think ... am I correct?)

\(\displaystyle f = c_1 \chi_{x_1} + c_2 \chi_{x_2} + \ ... \ + c_m \chi_{x_m} \)

and another element would be

\(\displaystyle f' = c'_1 \chi_{x'_1} + c'_2 \chi_{x'_2} + \ ... \ + c'_n \chi_{x'_n} \)

and we could formally add these so

\(\displaystyle f + f' = ( c_1 \chi_{x_1} + c_2 \chi_{x_2} + \ ... \ + c_m \chi_{x_m} ) + ( c'_1 \chi_{x'_1} + c'_2 \chi_{x'_2} + \ ... \ + c'_n \chi_{x'_n} ) \)

Is that right?

... ... BUT ... ... in the proof of Theorem 10.1 Cooperstein writes the elements of \(\displaystyle V\) as \(\displaystyle ( v_1, v_2, \ ... \ ... \ , \ v_m )\) and \(\displaystyle ( v'_1, v'_2, \ ... \ ... \ , \ v'_m )\) ... ...

... ... ? ... ... is this just a convenient way to write \(\displaystyle f\) and \(\displaystyle f'\) ... ? ... if so, is it a problem that f and f' may have a different number of terms in their sums due to different supports ...

How is Cooperstein's notation of \(\displaystyle ( v_1, v_2, \ ... \ ... \ , \ v_m )\) for an element of \(\displaystyle V\) justified?

Hope you can help ...

Peter

===========================================================

For the convenience of readers of this post I am providing the text of Cooperstein's introduction to Section 10.1: Introduction to Tensor Products ... ... the text will include the statement of Theorem 10.1 and the start of the proof ... ... as follows:

View attachment 5421
View attachment 5422

Sorry ... cannot provide more ... hit quota limit ...

See previous posts on this thread ...

Peter
 
  • #9
Peter said:
Hi Deveno ... thanks so much for your previous posts on this topic ... I have been doing a lot of reflecting on what you have said ... but still have some issues ... hope you can clarify things further ... especially the representation of elements of \(\displaystyle V\) ...

Although I now understand that an m-tuple is a function ... I am still unsure about what is going on in Cooperstein's move in Theorem 10.1 where he considers the elements of the vector space \(\displaystyle Z\) to be of the form \(\displaystyle ( v_1, v_2, \ ... \ ... \ , \ v_m )\) ... ...

Now Cooperstein defines \(\displaystyle V\) by \(\displaystyle V = \mathcal{M}_{ fin } ( X, \mathbb{F} )\) ... ...

So \(\displaystyle V\) is the set of all functions \(\displaystyle f \ : \ X \longrightarrow \mathbb{F}\) such that the support of \(\displaystyle f\) is finite ... ...

Cooperstein defines \(\displaystyle \iota \ : \ X \longrightarrow V\) by \(\displaystyle \iota (x) = \chi_x\) ...

... and shows that \(\displaystyle \mathcal{B} = \{ \chi_x \ | \ x \in X \}\) is a basis for \(\displaystyle V\) ...

So an element of \(\displaystyle V\) would be (I think ... am I correct?)

\(\displaystyle f = c_1 \chi_{x_1} + c_2 \chi_{x_2} + \ ... \ + c_m \chi_{x_m} \)

and another element would be

\(\displaystyle f' = c'_1 \chi_{x'_1} + c'_2 \chi_{x'_2} + \ ... \ + c'_n \chi_{x'_n} \)

and we could formally add these so

\(\displaystyle f + f' = ( c_1 \chi_{x_1} + c_2 \chi_{x_2} + \ ... \ + c_m \chi_{x_m} ) + ( c'_1 \chi_{x'_1} + c'_2 \chi_{x'_2} + \ ... \ + c'_n \chi_{x'_n} ) \)

Is that right?

Yes, but the characteristic (i.e., boolean) functions are typically suppressed; it's more convenient to just write $x_j$ than $\chi_{x_j}$ (there is an obvious bijection between the two). You can think of it this way: a characteristic function just "picks" an element $x_j$ out of the set $X = \{x_1,\dots,x_n\}$. Writing it ($x_j$) on a piece of paper accomplishes much the same thing (although it may lead to rather long discussions of: "what do you mean by that?").
... ... BUT ... ... in the proof of Theorem 10.1 Cooperstein writes the elements of V as \(\displaystyle ( v_1, v_2, \ ... \ ... \ , \ v_m )\) and \(\displaystyle ( v'_1, v'_2, \ ... \ ... \ , \ v'_m )\) ... ...

... ... ? ... ... is this just a convenient way to write \(\displaystyle f\) and \(\displaystyle f'\) ... ? ... if so, is it a problem that f and f' may have a different number of terms in their sums due to different supports ...

How is Cooperstein's notation of \(\displaystyle ( v_1, v_2, \ ... \ ... \ , \ v_m )\) for an element of \(\displaystyle V\) justified?

If we adopt the convention that each $v_i \in V_i$, then certainly such $n$-tuples are SOME of the elements in the free vector space generated by the $V_i$ (or rather, by their Cartesian product). But they certainly aren't ALL the elements; we need arbitrary formal linear combinations of them. The difference in the supports is a non-issue: since both are finite, we can take the larger of the two supports, and pad the lesser with "extra 0-vectors".

Hope you can help ...

Peter

===========================================================

For the convenience of readers of this post I am providing the text of Cooperstein's introduction to Section 10.1: Introduction to Tensor Products ... ... the text will include the statement of Theorem 10.1 and the start of the proof ... ... as follows:
Sorry ... cannot provide more ... hit quota limit ...

See previous posts on this thread ...

Peter

It's easier to see what is going on by just considering two vector spaces, and proceeding inductively. Let's use $V_1 = V_2 = \Bbb R$.

Then the elements of $FV(\Bbb R \times \Bbb R)$ are formal linear combinations of points $(x,y)$, so for example:

$(1,2) + (3,2)$ cannot be reduced further. As long as our points are *distinct*, any linear combination of them is "irreducible".

We can add two such formal linear combinations together like so:

$[3(1,2) + 4(1,1) - 6(-2,1)] + [2(-1,1) + (1,2)] = 4(1,2) + 4(1,1) - 6(-2,1) + 2(-1,1)$

where the only "reduction" we could take was: $3(1,2) + (1,2) = 4(1,2)$ because these shared a common basis element.

Generally, Cooperstein's elements of the free vector space generated by a product of vector spaces will have formal linear combinations of "tuples of vectors" (each "coordinate" comes from a different vector space), which should not be confused with "tuples of scalars".
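The arithmetic of these formal linear combinations can be sketched in a few lines of Python (my own illustration; the dict-of-coefficients encoding is an assumption, not Cooperstein's notation): an element of $FV(\Bbb R \times \Bbb R)$ is a finitely supported assignment of scalars to points, and addition merges coefficients on common basis elements.

```python
from collections import defaultdict

def add(f, g):
    """Formally add two elements of the free vector space, each represented as a
    dict mapping basis points (tuples) to their scalar coefficients."""
    h = defaultdict(int)
    for combo in (f, g):
        for point, coeff in combo.items():
            h[point] += coeff
    # drop zero coefficients so dict equality matches equality in the space
    return {p: c for p, c in h.items() if c != 0}

f = {(1, 2): 3, (1, 1): 4, (-2, 1): -6}   # 3(1,2) + 4(1,1) - 6(-2,1)
g = {(-1, 1): 2, (1, 2): 1}               # 2(-1,1) + (1,2)
print(add(f, g))  # {(1, 2): 4, (1, 1): 4, (-2, 1): -6, (-1, 1): 2}
```

The only "reduction" happens on the shared basis element $(1,2)$, exactly as in the worked sum above.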
 
  • #10
Deveno said:
Yes, but the characteristic (i.e., Boolean) functions are typically suppressed; it's more convenient to just write $x_j$ than $\chi_{x_j}$ (there is an obvious bijection between the two). You can think of it this way: a characteristic function just "picks" an element $x_j$ out of the set $X = \{x_1,\dots,x_n\}$. Writing it ($x_j$) on a piece of paper accomplishes much the same thing (although it may lead to rather long discussions of: "what do you mean by that?").

If we adopt the convention that each $v_i \in V_i$, then certainly such $n$-tuples are SOME of the elements in the free vector space generated by the $V_i$ (or rather their cartesian product). But they certainly aren't ALL the elements; we need every formal linear combination thereof. The difference in the supports is a non-issue: since both are finite, we can take the maximum of the two supports and pad the lesser with "extra 0-vectors".
It's easier to see what is going on by just considering two vector spaces, and proceeding inductively. Let's use $V_1 = V_2 = \Bbb R$.

Then the elements of $FV(\Bbb R \times \Bbb R)$ are formal linear combinations of points $(x,y)$, so for example:

$(1,2) + (3,2)$ cannot be reduced further. As long as our points are *distinct*, any linear combination of them is "irreducible".

We can add two such formal linear combinations together like so:

$[3(1,2) + 4(1,1) - 6(-2,1)] + [2(-1,1) + (1,2)] = 4(1,2) + 4(1,1) - 6(-2,1) + 2(-1,1)$

where the only "reduction" we could take was: $3(1,2) + (1,2) = 4(1,2)$ because these shared a common basis element.

Generally, Cooperstein's elements of the free vector space generated by a product of vector spaces will have formal linear combinations of "tuples of vectors" (each "coordinate" comes from a different vector space), which should not be confused with "tuples of scalars".

Thanks for the further help, Deveno ...

Just some further issues ...

After reflecting on your post above (and keeping in mind your previous posts on the topic) it seems that we could regard the map \(\displaystyle \iota\) from \(\displaystyle X = V_1 \times V_2 \times \ ... \ ... \ \times V_m\) to the vector space \(\displaystyle Z\) as an inclusion map ... is that right?

I am also getting a little confused about the nature of \(\displaystyle X = V_1 \times V_2 \times \ ... \ ... \ \times V_m\) as the Cartesian product of \(\displaystyle m\) vector spaces ... is it just a set? ... or is it a vector space? ... I was regarding it as simply a set ... then I read the following statement in the proof of Theorem 10.1 ...

" ... ... Since both \(\displaystyle (v_1, \ ... \ , v_{i-1}, cv_i, v_{i+1}, \ ... \ , v_m )\) and \(\displaystyle (v_1, \ ... \ ... \ , v_m )\) are elements of \(\displaystyle X = V_1 \times V_2 \times \ ... \ ... \ \times V_m \) ... ... "

The presence of \(\displaystyle cv_i\) in the \(\displaystyle X\) element \(\displaystyle (v_1, \ ... \ , v_{i-1}, cv_i, v_{i+1}, \ ... \ , v_m )\) is surely peculiar if \(\displaystyle X\) is purely a set?

Can you clarify ... ?

Peter
 
Last edited:
  • #11
Peter said:
Thanks for the further help, Deveno ...

Just some further issues ...

After reflecting on your post above (and keeping in mind your previous posts on the topic) it seems that we could regard the map \(\displaystyle \iota\) from \(\displaystyle X = V_1 \times V_2 \times \ ... \ ... \ \times V_m\) to the vector space \(\displaystyle Z\) as an inclusion map ... is that right?

I am also getting a little confused about the nature of \(\displaystyle X = V_1 \times V_2 \times \ ... \ ... \ \times V_m\) as the Cartesian product of \(\displaystyle m\) vector spaces ... is it just a set? ... or is it a vector space? ... I was regarding it as simply a set ... then I read the following statement in the proof of Theorem 10.1 ...

" ... ... Since both \(\displaystyle (v_1, \ ... \ , v_{i-1}, cv_i, v_{i+1}, \ ... \ , v_m )\) and \(\displaystyle (v_1, \ ... \ ... \ , v_m )\) are elements of \(\displaystyle X = V_1 \times V_2 \times \ ... \ ... \ \times V_m \) ... ... "

The presence of \(\displaystyle cv_i\) in the \(\displaystyle X\) element \(\displaystyle (v_1, \ ... \ , v_{i-1}, cv_i, v_{i+1}, \ ... \ , v_m )\) is surely peculiar if \(\displaystyle X\) is purely a set?

Can you clarify ... ?

Peter

The cartesian product IS being regarded as "just a set", but each of the "coordinates" is a vector, and as long as we stay "within one coordinate", things like $cv_i$ and $v_i+v'_i$ make sense. To put it another way, in $FV(V_1 \times \cdots \times V_n)$:

$c(v_1,\dots,v_i,\dots,v_n) \neq (cv_1,\dots,cv_i,\dots,cv_n)$; indeed, these two elements are (in general) linearly independent.

In fact, even after we take the quotient ($u \otimes v$ is just shorthand for $(u,v) + V_0$ in the quotient space, when we are just tensoring two spaces. The multilinear map $\mu$ is often just called "$\otimes$"), we don't have:

$c(v_1\otimes\cdots\otimes v_i\otimes \cdots \otimes v_n) = (cv_1\otimes\cdots\otimes cv_i\otimes\cdots\otimes cv_n)$

but rather:

$c^n(v_1\otimes\cdots\otimes v_i\otimes \cdots \otimes v_n) = (cv_1\otimes\cdots\otimes cv_i\otimes\cdots\otimes cv_n)$

The vector space $V_1 \times \cdots \times V_n$, as a vector space, has dimension $\dim(V_1) + \cdots + \dim(V_n)$.

The vector space $FV(V_1 \times \cdots \times V_n)$ has dimension equal to the cardinality of its basis $V_1 \times\cdots\times V_n$, which (for a finite field and finite dimensions) is $|F|^{\dim(V_1)} \cdots |F|^{\dim(V_n)} = |F|^{\dim(V_1) + \cdots + \dim(V_n)}$, much, much larger: every element of $V_1 \times\cdots\times V_n$ is a basis vector, and we take "formal linear combinations" of these, resulting in unbelievably long and complicated possible expressions.
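The point that $c(v_1,\dots,v_n)$ and $(cv_1,\dots,cv_n)$ are different elements of the free vector space can be seen directly in a dict-of-coefficients encoding (again my own sketch, not Cooperstein's notation): scalar multiplication acts on the coefficient attached to a basis tuple, never inside the tuple itself.

```python
def scale(c, f):
    """Scalar multiplication in the free vector space:
    multiply every coefficient by c, leaving the basis tuples untouched."""
    return {point: c * coeff for point, coeff in f.items()}

v = {(1, 2): 1}       # the basis element (1,2) of FV(R x R), with coefficient 1

# c . (1,2) scales the COEFFICIENT attached to the basis element (1,2) ...
lhs = scale(3, v)     # {(1, 2): 3}

# ... whereas (3*1, 3*2) = (3,6) is a DIFFERENT basis element altogether:
rhs = {(3, 6): 1}

print(lhs == rhs)  # False: the two are distinct elements of the free vector space
```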
 
  • #12
Deveno said:
There are different ways authors discuss the question: "what is an $n$-tuple?"

In fact, some don't even discuss it at all, but rather take it as "obvious" one should know what an $n$-tuple is.

But formally, an $n$-tuple is a FUNCTION:

$f: \{1,2,\dots,n\} \to X$, where $X$ can be ANY SET.

So $f(j) \in X$, for each $j$, and it is common to represent the image of $j$ as $x_j$, and the ENTIRE FUNCTION $f$ as:

$(x_1,x_2,\dots,x_n)$.

Equivalently, for FINITE $n$, one can define an $n$-tuple as an element of the $n$-fold cartesian product:

$X \times X \times\cdots \times X$

The "indexed" version I gave first generalizes much better for infinite sets, because with "infinite tuples" it can be unclear how (or downright impossible) to list them as a linear array.

***********

Ok, so now let's talk about what "the vector space based on $X$" is. I will use a down-to-earth example.

Let $X = \{\text{Alice},\text{Bob},\text{Carol}\}$. We will suppose that this set refers to three honest-to-goodness real people. We would like to turn this set into a vector space.

Well, we have a problem: we can add vectors (they form an abelian group), but what the heck should:

$\text{Alice} + \text{Bob}$ even MEAN? Clearly, Alice and Bob aren't field elements, or group elements, or anything of the sort.

Well, we can use a clever trick computer programmers use; we create a Boolean function. This is nothing more than a function:

$f:X \to \{0,1\}$, where $f(x) = 1$ means "$x$ is true" and $f(x) = 0$ means "$x$ is false".

So we can create a function called:

"Are you Alice?". Such a function is called (because mathematicians love to make things really, really confusing) the characteristic function:

$\chi_{\text{Alice}}$.

We have three such functions, one for each person in our set.

Now we have a bijection:

$\phi:\{1,2,3\} \to \{\text{Alice},\text{Bob},\text{Carol}\}$. All this really says is we have three people, all different from each other.

In order to reduce the amount of typing I have to do, I am going to refer to these people henceforth as $A,B,C$. I hope this causes no confusion.

Now, in a feat of extraordinary mathematical sleight of hand, we can consider the following functions:

$\chi_A \circ \phi$
$\chi_B \circ \phi$
$\chi_C \circ \phi$.

Now these functions go from $\{1,2,3\}$ to the set $\{0,1\}$, so they are triples, namely:

$(1,0,0),(0,1,0),(0,0,1)$. If we map these to the standard basis vectors of $F^3$ (for any field $F$, which always has a 1 and a 0), we can now define:

$xA + yB + zC \leftrightarrow (x,y,z)$ and use the vector addition on $F^3$ to define a vector addition on a vector space with basis vectors Alice, Bob and Carol.

So Alice + Bob corresponds to $(1,1,0)$, which expressed in that basis remains simply "Alice + Bob", or if you prefer:

"one Alice and one Bob".

Hi Deveno,

I was revising the idea of an n-tuple as a function ... or indeed of an n-tuple being identified with a characteristic function ...

... and hence revising your post ... but began to get puzzled about the role of, and need for, the function \(\displaystyle \phi\) in your post ...

Basically, if we have an ordered set \(\displaystyle X = \{ A, B, C \}\) (using shorthand for Alice, Bob and Carol ... ) then we define characteristic functions \(\displaystyle \chi_A \ : \ X \longrightarrow \{ 0, 1 \}\)

\(\displaystyle \chi_B \ : \ X \longrightarrow \{ 0, 1 \}\)

and

\(\displaystyle \chi_C \ : \ X \longrightarrow \{ 0, 1 \}\)

Then

\(\displaystyle \chi_A (A) = 1\)

\(\displaystyle \chi_A (B) = 0 \)

\(\displaystyle \chi_A (C) = 0 \)

so, if we retain the order of the domain as the order of the image set (can we?), then we get that \(\displaystyle \chi_A\) is the triple or 3-tuple \(\displaystyle (1, 0, 0)\)

similarly we can determine that

\(\displaystyle \chi_B\) is the triple or 3-tuple \(\displaystyle (0, 1, 0)\)

and

\(\displaystyle \chi_C\) is the triple or 3-tuple \(\displaystyle (0, 0, 1)\)
so we have that the images of the characteristic functions, with the order of the domain imposed on them, give us 3-tuples ...

... my question then is ... why do we need the function \(\displaystyle \phi\)?



Hope you can help ...

Peter

*** NOTE/QUESTION ***

... ... can you give a text or set of online notes that covers this theory ... ...
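Peter's computation above can be checked mechanically. Here is a short Python sketch (my own illustration, not from the thread) in which composing each characteristic function with the bijection $\phi$ reads it off as a strict 3-tuple, and "Alice + Bob" becomes coordinate-wise addition of triples.

```python
def chi(name):
    """Characteristic function of {name}: 1 at name, 0 elsewhere."""
    return lambda y: 1 if y == name else 0

# The bijection phi : {1,2,3} -> {Alice, Bob, Carol}
phi = {1: "Alice", 2: "Bob", 3: "Carol"}

def as_triple(f):
    """Compose f with phi, i.e. evaluate f at phi(1), phi(2), phi(3) in order,
    turning a function on names into a strict 3-tuple."""
    return tuple(f(phi[j]) for j in (1, 2, 3))

print(as_triple(chi("Alice")))  # (1, 0, 0)
print(as_triple(chi("Bob")))    # (0, 1, 0)
print(as_triple(chi("Carol")))  # (0, 0, 1)

# "Alice + Bob" in the vector space based on X corresponds to coordinate-wise addition:
alice_plus_bob = tuple(a + b for a, b in zip(as_triple(chi("Alice")),
                                             as_triple(chi("Bob"))))
print(alice_plus_bob)  # (1, 1, 0)
```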
 
  • #13
It's just a set bijection. Normally, we're used to representing vectors with "coordinates", and we use an *indexing* set (which we represent by using subscripts) to "tag" the coordinates. We don't NEED to do this, but it seems more readily understandable to speak of the "first" coordinate (the scalar $x$ in the linear combination:

$xA + yB +zC$)

than the "alice-th" coordinate.

The whole purpose of $\phi$ is to get our "types" matching, so that we have a "strict" triple (a function from $\{1,2,3\}$), instead of a "loose" one (indexed by Alice, Bob and Carol). It's perfectly "possible" to index by names; for example, we could write points in the plane as:

$P = (3_{\text{left}},-2_{\text{up}})$

but natural numbers, by virtue of their already well-understood ORDER, make much better candidates.

The whole POINT of creating the vector space based on the set $X$ uses only ONE feature of $X$: its SIZE. *Any* set with the same size (that is, any set for which we can produce a set bijection with $X$) will produce an isomorphic vector space. That's why the vector space based on $X$ is only unique "up to isomorphism".

When we use this space to create a tensor product, our main goal is to "start with something we know will be big enough to whittle down to what we need".

It's not unlike creating the free group from a set; instead of making "words", we make "formal linear combinations". The complicated machinery discussed in this thread is SOLELY for the purpose of showing this CAN be done, in at least ONE way. And the REASON we want to do this is to show that we CAN make a quotient of this vector space that accomplishes our goal of turning multilinear maps into linear maps out of our "special" creation, which we then dub the tensor product.
 
  • #14
Deveno said:
It's just a set bijection. Normally, we're used to representing vectors with "coordinates", and we use an *indexing* set (which we represent by using subscripts) to "tag" the coordinates. We don't NEED to do this, but it seems more readily understandable to speak of the "first" coordinate (the scalar $x$ in the linear combination:

$xA + yB +zC$)

than the "alice-th" coordinate.

The whole purpose of $\phi$ is to get our "types" matching, so that we have a "strict" triple (a function from $\{1,2,3\}$), instead of a "loose" one (indexed by Alice, Bob and Carol). It's perfectly "possible" to index by names; for example, we could write points in the plane as:

$P = (3_{\text{left}},-2_{\text{up}})$

but natural numbers, by virtue of their already well-understood ORDER, make much better candidates.

The whole POINT of creating the vector space based on the set $X$ uses only ONE feature of $X$: its SIZE. *Any* set with the same size (that is, any set for which we can produce a set bijection with $X$) will produce an isomorphic vector space. That's why the vector space based on $X$ is only unique "up to isomorphism".

When we use this space to create a tensor product, our main goal is to "start with something we know will be big enough to whittle down to what we need".

It's not unlike creating the free group from a set; instead of making "words", we make "formal linear combinations". The complicated machinery discussed in this thread is SOLELY for the purpose of showing this CAN be done, in at least ONE way. And the REASON we want to do this is to show that we CAN make a quotient of this vector space that accomplishes our goal of turning multilinear maps into linear maps out of our "special" creation, which we then dub the tensor product.
Thanks Deveno ... That issue is much clearer now

... now revising the other posts in the thread ...

... any recommended texts for the above theory?

Peter
 

FAQ: Proof of Existence of Tensor Product .... Cooperstein ....

What is the tensor product in mathematics?

The tensor product is a mathematical operation that combines two vector spaces to create a new vector space. It is used to represent the relationship between two vector spaces and allows for the representation of multidimensional data.

How is the tensor product calculated?

The tensor product is constructed by forming the free vector space on the Cartesian product of the two vector spaces and then taking the quotient by the subspace generated by the relations that enforce bilinearity. The images of the pairs of vectors in this quotient are the elementary tensors, and they span the tensor product space.

What is the significance of the tensor product in physics?

The tensor product is used in physics to represent physical quantities that have both magnitude and direction, such as force, velocity, and electromagnetic fields. It allows for the representation of these quantities in multiple dimensions and is an essential tool for understanding and describing physical systems.

How does the tensor product relate to the Cooperstein proof of existence?

The Cooperstein proof of existence is a mathematical proof that shows the existence of a tensor product for a given set of vector spaces. It provides a rigorous mathematical framework for understanding and calculating the tensor product.

Can the tensor product be extended to more than two vector spaces?

Yes, the tensor product can be extended to any number of vector spaces. This is known as the n-fold tensor product and is used in many applications, including quantum mechanics and differential geometry.
