Does Lemma 3 in Winitzki's Tensor Product Dimension Section Clarify Covectors?

  • Thread starter: Math Amateur
  • Tags: Tensors
In summary, the conversation is about the proof of Lemma 3 in Section 1.7.3 of Sergei Winitzki's book "Linear Algebra via Exterior Products". The original poster cannot see how to show the existence of a covector f^* ∈ V^* such that f^*(v_j) = δ_{j_1 j} for j = 1, ..., n, asks for a proof from first principles, and questions the relevance of the exercise Winitzki cites. The replies explain dual bases, the construction of such a covector, and the induced map on the tensor product, with worked examples of how covectors act.
  • #1
Math Amateur
I am reading Sergei Winitzki's book: Linear Algebra via Exterior Products ...

I am currently focused on Section 1.7.3 Dimension of a Tensor Product is the Product of the Dimensions ... ...

I need help in order to get a clear understanding of an aspect of the proof of Lemma 3 in Section 1.7.3 ...

The relevant part of Winitzki's text reads as follows:

https://www.physicsforums.com/attachments/5357
https://www.physicsforums.com/attachments/5358
In the above quotation from Winitzki we read the following:

" ... ... By the result of Exercise 1 in Sec. 6.3 there exists a covector \(\displaystyle f^* \in V^*\) such that

\(\displaystyle f^*(v_j) = \delta_{j_1 j}\) for \(\displaystyle j = 1, \dots, n\) ... ... "
I cannot see how to show that there exists a covector \(\displaystyle f^* \in V^*\) such that

\(\displaystyle f^*(v_j) = \delta_{j_1 j}\) for \(\displaystyle j = 1, \dots, n\).

Can someone help me to show this from first principles?

It may be irrelevant to my problem, but I cannot see the relevance of Exercise 1 in Section 6, which reads as follows:

View attachment 5359

Exercise 1 refers to Example 2, which reads as follows:

https://www.physicsforums.com/attachments/5360
View attachment 5361
BUT ... since I wish to show the result

"there exists a covector \(\displaystyle f^* \in V^*\) such that

\(\displaystyle f^*(v_j) = \delta_{j_1 j}\) for \(\displaystyle j = 1, \dots, n\)"

from first principles, the above example seems irrelevant ... BUT then ... I cannot see its relevance anyway!

Hope someone can help ...

Peter
===========================================================

*** NOTE ***

To help readers understand Winitzki's approach and notation for tensors, I am providing Winitzki's introduction to Section 1.7, as follows:

View attachment 5362
View attachment 5363
View attachment 5364
 
  • #2
Extend $\{v_1,\dots,v_k\}$ to a basis $B$ for $V$.

Then we may take $f^{\ast} = v_{j_1}^{\ast}$, which returns the $j_1$-th coordinate of the vector $v$ in the basis $B$.

Here are some very *basic* examples of how co-vectors work:

Suppose $V = \Bbb R^3$ and we have the basis $\{(1,0,0),(0,1,0),(0,0,1)\}$.

Let's write these as column vectors, that is, 3x1 matrices.

Now, we know that using this basis, we can write any linear function $\Bbb R^3 \to \Bbb R$ (these are the elements of $(\Bbb R^3)^{\ast}$) as a 1x3 matrix (a row vector), say:

$\begin{bmatrix}a&b&c\end{bmatrix}$.

This, in turn, can be written as a linear combination of 1x3 matrices:

$a\begin{bmatrix}1&0&0\end{bmatrix} + b\begin{bmatrix}0&1&0\end{bmatrix}+c\begin{bmatrix}0&0&1\end{bmatrix}$

where it should now be clear that the dual basis to the standard basis of column vectors is the standard basis of ROW vectors. We can take the "outer product" of these to get a 3x3 matrix, the identity matrix $(\delta_{ij})$ (whose entries are all 9 possible "inner products").

As functions, the dual basis is made of these three functions:

$(x,y,z) \mapsto x$
$(x,y,z) \mapsto y$
$(x,y,z) \mapsto z$.

You may see these functions called different names, sometimes the first is called:

$e_1^{\ast}$, or $\pi_1$ (the first projection function), or (more tantalizingly): $dx$.

The thing to keep in mind is that any time we turn "vectors" into "numbers", some way of associating a vector with an element of $F^n$ is lurking in the background: we are invoking a "coordinate system". Coordinate systems are not uniquely defined, we CHOOSE them. This choice affects WHICH matrices we get, and thus "what our dual vectors look like". For example, in the polynomial function space (a subspace of the space of all functions $f:\Bbb R \to \Bbb R$):

$P_2 = \{f \mid f(t) = a_0 + a_1t + a_2t^2,\ a_0,a_1,a_2 \in \Bbb R\}$ with the basis $\{1,\ t-1,\ t^2+t\}$ (the functions with these images), the dual basis is not made up of the functions which return the constant, linear or quadratic coefficients, but rather this set of functions:

$\{L_1,L_2,L_3: P_2 \to \Bbb R\}$,

where if $f(t) = a_0 + a_1t + a_2t^2$, then:

$L_1(f) = a_0 + a_1 - a_2$,
$L_2(f) = a_1 - a_2$,
$L_3(f) = a_2$.

In the "normal" (usual/standard) basis, the co-vector that is dual to $t$ (the function $f:f(t) = t$) returns a value of $3$ for the polynomial $2 + 3t + t^2$. In our "non-standard" basis, this polynomial has the coefficient-vector $(4,2,1)$, since:

$4 + (2)(t - 1) + (1)(t^2+t) = 4 + 2t - 2 + t^2 + t = (4 - 2) + (2 + 1)t + t^2 = 2 + 3t + t^2$

And so $L_2(2 + 3t + t^2) = 2$ (see, this is a different function: it gives a different "number" for the same polynomial than the standard dual basis co-vector does).

However, IN THE BASIS $B = \{b_1 = 1,b_2 = t-1,b_3 = t^2+t\}$, we see that:

$L_2(b_2) = L_2(t-1) = 1$, while:

$L_2(b_1) = L_2(1) = 0$, and $L_2(b_3) = L_2(t^2 + t) = 1 - 1 = 0$, and you may verify that indeed:

$L_i(b_j) = \delta_{ij}$,

so in this basis, the $L_i$ have the expected matrices: $\begin{bmatrix}1&0&0\end{bmatrix},\begin{bmatrix}0&1&0\end{bmatrix},\begin{bmatrix}0&0&1\end{bmatrix}$,

whereas in the standard basis we have the matrices for the $L_i$:

$\begin{bmatrix}1&1&-1\end{bmatrix}, \begin{bmatrix}0&1&-1\end{bmatrix},\begin{bmatrix}0&0&1\end{bmatrix}$
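
To make the $P_2$ example concrete, here is a minimal numerical check (my own sketch, not from Winitzki; I represent a polynomial $a_0 + a_1t + a_2t^2$ by its standard coefficient vector $(a_0,a_1,a_2)$): the dual basis functionals are just the rows of the inverse of the matrix whose columns are $b_1, b_2, b_3$ in standard coordinates.

```python
import numpy as np

# Columns are b1 = 1, b2 = t - 1, b3 = t^2 + t in standard coordinates (a0, a1, a2).
B = np.array([[1., -1., 0.],
              [0.,  1., 1.],
              [0.,  0., 1.]])

# Rows of B^(-1) are the dual functionals L1, L2, L3 written in standard coordinates.
L = np.linalg.inv(B)
print(L)
# [[ 1.  1. -1.]
#  [ 0.  1. -1.]
#  [ 0.  0.  1.]]   <- matches the row matrices for the L_i given above

print(np.allclose(L @ B, np.eye(3)))   # True: L_i(b_j) = delta_ij
print(L[1] @ np.array([2., 3., 1.]))   # 2.0: the worked value L_2(2 + 3t + t^2) = 2
```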
 
  • #3
Deveno said: [quoting post #2 in full]
... thanks so much for this detailed and helpful post, Deveno ... appreciate your support for my attempt to understand tensors!

Just working carefully through the detail of your post ...

Peter
 
  • #4
Deveno said: [quoting post #2 in full]

Hi Deveno ... I may well be misunderstanding your post, but it does not look as if you answered my question, which was as follows:

How do we show from first principles that there exists a covector \(\displaystyle f^* \in V^*\) such that

\(\displaystyle f^*(v_j) = \delta_{j_1 j}\) for \(\displaystyle j = 1, \dots, n\)?

Further, a secondary question has developed: how exactly do we use this in Lemma 3, given that there appear to be two versions of \(\displaystyle f^*\)? One is a map

\(\displaystyle f^* \ : \ V \longrightarrow \mathbb{K}\), that is, \(\displaystyle f^* \in V^*\) ... and apparently another \(\displaystyle f^*\) which is a map

\(\displaystyle f^* \ : \ V \otimes W \longrightarrow W\)

How do we cope with two (?) functions \(\displaystyle f^*\) ... we seem to use a result about the first \(\displaystyle f^*\) in a computation concerning the second ...

Can you clarify ... ?

Hope you can help further ...

Peter
 
  • #5
Peter said: [quoting post #4 in full]

It's there, right at the beginning. The key is that the $v_k$ are linearly independent.

So basically, we define $v \in V$ as a linear combination of the $v_k$ plus some other vectors, say $u_1,\dots,u_t$ such that:

$\{v_1,\dots,v_n,u_1,\dots,u_t\}$ is a basis for $V$, in which case we have:

$v = c_1v_1 + \cdots + c_nv_n + c_{n+1}u_1 +\cdots + c_{n+t}u_t$.

And we set:

$f^{\ast}(v) = c_{j_1}$ (remember $j_1$ is some FIXED element of $\{1,\dots,n\}$), so we are talking about a "definite" coordinate projection function.

It should be clear that $f^{\ast}(v_j) = 0$ if $j \neq j_1$, and $f^{\ast}(v_{j_1}) = 1$, by the uniqueness of the representation of $v$ as a linear combination of our basis.

We don't really care about what $f^{\ast}(u_k)$ is, for any $k$, because we're not going to use those basis elements, but it should be clear that $f^{\ast}$ sends them all to 0.

I don't have your text, so I don't know how $f^{\ast}$ is defined (I would need to see Lemma 1 to Eq. (1.23)), but it appears as if he is using the definition:

$f^{\ast}(v \otimes w) = f^{\ast}(v)w$ (multiplying $w$ by the scalar $f^{\ast}(v)$).

The proof he gives should convince you that $\Bbb R^n \otimes \Bbb R^m \cong \Bbb R^{mn}$, and we can represent a $2$-tensor (a tensor product with two factors) of these as a MATRIX. "Which" matrix we use to REPRESENT a $2$-tensor is going to depend on what bases we choose for $V$ and $W$ (often such a matrix is called a set of "structure constants" for the tensor, although that term is also used differently in different contexts). In general, a $k$-fold tensor product can be represented by a $k$-dimensional array, although the specific entries will be basis-dependent, typically.

(Caveat: It is possible to define things like $V^{\ast} \otimes W$, physicists like to distinguish between such things by putting indices "up" or "down" (we talk about covariant, or contravariant indices). To sort of over-simplify, we can regard both vectors (columns) and co-vectors (rows) as $n$-tuples, but in matrix multiplication, rows and columns have different roles. This is much like the left vs. right difficulty that occurs in much of algebra. So, just to be clear, when I say a 2-tensor can be associated with a matrix (which is actually a 1-up, 1-down tensor), I am implicitly invoking an isomorphism between $V$ and $V^{\ast}$. This mostly becomes an issue when one is applying a change-of-basis and one is faced with whether the change-of-basis transformation $P$ or $P^{-1}$ should be applied to one's basis elements. Orthonormal bases help out tremendously, then, because we can just use $P^T$ which makes the process conceptually easier).
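
Since the original question was how to get such an $f^{\ast}$ "from first principles", here is a small numerical sketch of exactly the construction described above (my own code and function name, assuming $V = \Bbb R^n$ with real entries): extend the linearly independent vectors to a basis, invert the basis matrix, and read off the row that picks out the $j_1$-th coordinate.

```python
import numpy as np

def coordinate_covector(vs, j1):
    """Illustrative helper (not from the text): given linearly independent
    vectors vs in R^n and a fixed index j1, return a row vector f with
    f @ vs[j] == delta_{j1 j} for every j."""
    n = len(vs[0])
    A = np.column_stack(vs)                      # the given vectors, as columns
    for e in np.eye(n):                          # extend to a basis of R^n
        if np.linalg.matrix_rank(np.column_stack([A, e])) > np.linalg.matrix_rank(A):
            A = np.column_stack([A, e])
    return np.linalg.inv(A)[j1]                  # row j1 of A^(-1) is the dual covector

# Example: two linearly independent vectors in R^3, with j1 = 0 (zero-based).
v1, v2 = np.array([1., 2., 0.]), np.array([0., 1., 1.])
f = coordinate_covector([v1, v2], 0)
print(f @ v1, f @ v2)    # 1.0 0.0, i.e. f*(v_j) = delta_{j1 j}
```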
 
  • #6
Deveno said: [quoting post #5 in full]
Hi Deveno,

Thanks for the help ... but I need you to clarify a couple of points ... You write:
" ... ... It's there, right at the beginning. The key is that the $v_k$ are linearly independent.

So basically, we define $v \in V$ as a linear combination of the $v_k$ plus some other vectors, say $u_1,\dots,u_t$ such that:

$\{v_1,\dots,v_n,u_1,\dots,u_t\}$ is a basis for $V$, in which case we have:

$v = c_1v_1 + \cdots + c_nv_n + c_{n+1}u_1 +\cdots + c_{n+t}u_t$. ... ... "

In the above you talk of a basis for $V$ being $\{v_1,\dots,v_n,u_1,\dots,u_t\}$ ... BUT ... the statement of Lemma 3 is as follows:

View attachment 5366

The statement of Lemma 3 seems to me to imply that \(\displaystyle \{ v_1, \dots, v_m \}\) is a linearly independent set in \(\displaystyle V\) and that \(\displaystyle \{ u_1, \dots, u_n \}\) is a linearly independent set in \(\displaystyle W\) ...

Is that right?

If it is how can we have $\{v_1,\dots,v_n,u_1,\dots,u_t\}$ as a basis for $V$?
=====================================================

*** NOTE 1 ***


In fact, the more I look at it, the more I think the author meant to write the following at the start of the statement of Lemma 3:

"Lemma 3: If \(\displaystyle \{ v_1, \dots, v_m \}\) and \(\displaystyle \{ w_1, \dots, w_n \}\) are two linearly independent sets in their respective spaces ... ... "

=====================================================

*** NOTE 2 ***

You mentioned that you needed to be able to read Lemma 1 ... I am therefore providing Section 1.7.3 in total ... it contains Lemmas 1 to 3 ... and also contains the definition of \(\displaystyle f^*\) ... it reads as follows:

https://www.physicsforums.com/attachments/5367
View attachment 5368
View attachment 5369

=====================================================

*** NOTE 3 ***

In Section 1.7.3 we read the following in Lemma 2 (see the text from Winitzki directly above):

" ... ... If \(\displaystyle f^*\) is a covector, we define the map

\(\displaystyle f^* \ : \ V \otimes W \longrightarrow W \)

(tensors into vectors) by the formula

\(\displaystyle f^* ( \sum_k v_k \otimes w_k ) \equiv \sum_k f^* ( v_k ) w_k\) ... ... ... (1.21)

... ... ... ... "
BUT ... as the above definition indicates, \(\displaystyle f^*\) is already defined as a covector ... that is, \(\displaystyle f^*\) is defined as belonging to the space \(\displaystyle V^*\), which means \(\displaystyle f^* \ : \ V \longrightarrow \mathbb{K}\) ... so if we are talking about the same map \(\displaystyle f^*\), how can it also be a map \(\displaystyle f^* \ : \ V \otimes W \longrightarrow W \)?

I am providing the text where Winitzki first defines \(\displaystyle f^*\) as a covector, as follows:

View attachment 5371
View attachment 5372

Peter
 
  • #7
Any subset of a linearly independent set is linearly independent.

Any basis is a linearly independent set.

Any linearly independent set can be extended to a basis (in a non-unique way, generally, but at least ONE such way always exists).

Read the lemma again: he DOES say that.

For any basis $\{b_1,\dots,b_k\}$ of a vector space $V$, there exist linear functionals $f_j^{\ast}$ such that:

$f_j^{\ast}(b_i) = \delta_{ij}$ for each $j$, namely the linear functional $(b_j)^{\ast}$. This remains true if we consider some subset:

$\{b_{i_1},\dots,b_{i_r}\}$ (if $j \not\in \{i_1,\dots,i_r\}$ the 0-functional will do, but we're only interested in the linear functional that picks out one basis vector to preserve, and kills the rest).

Look, a nonzero linear functional is just a linear map of rank 1. To create a linear functional that sends $b_j \to 1$ and $b_i \to 0$ (for $i \neq j$) is very easy: just zero out every basis coefficient in a linear combination except the coefficient of $b_j$, and send that coefficient to itself.

Of course, to DO that, we need to know what the coefficients in a linear combination of the basis elements ARE, which is why we need a basis to define a dual basis.
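
Spelling this out in the notation of Lemma 3 (the coefficient names $c_i$, $d_k$ are mine, following the earlier replies): extend $\{v_1,\dots,v_n\}$ to a basis $\{v_1,\dots,v_n,u_1,\dots,u_t\}$ of $V$. Every $v \in V$ then has a unique expansion

$v = c_1v_1 + \cdots + c_nv_n + d_1u_1 + \cdots + d_tu_t$,

and we may define $f^{\ast}(v) := c_{j_1}$. Uniqueness of the coefficients makes $f^{\ast}$ linear, and substituting $v = v_j$ gives $f^{\ast}(v_j) = \delta_{j_1 j}$ for $j = 1,\dots,n$ (with $f^{\ast}(u_k) = 0$ for every $k$), which is exactly the covector the proof of Lemma 3 needs.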
 
  • #8
Deveno said: [quoting post #7 in full]
Hi Deveno ... Thanks again ... but I hope that, without trying your patience, I can ask a further question ...

... the issue/question is as follows ...

In Section 1.7.3 we read the following in Lemma 2 (see the text from Winitzki directly above):

" ... ... If \(\displaystyle f^*\) is a covector, we define the map

\(\displaystyle f^* \ : \ V \otimes W \longrightarrow W \)

(tensors into vectors) by the formula

\(\displaystyle f^* ( \sum_k v_k \otimes w_k ) \equiv \sum_k f^* ( v_k ) w_k\) ... ... ... (1.21)

... ... ... ... "
BUT ... as the above definition indicates, \(\displaystyle f^*\) is already defined as a covector ... that is, \(\displaystyle f^*\) is defined as belonging to the space \(\displaystyle V^*\), which means \(\displaystyle f^* \ : \ V \longrightarrow \mathbb{K}\) ... so if we are talking about the same map \(\displaystyle f^*\), how can it also be a map \(\displaystyle f^* \ : \ V \otimes W \longrightarrow W \)?


I am providing the text where Winitzki first defines \(\displaystyle f^*\) as a covector, as follows:

View attachment 5373
View attachment 5374
 
  • #9
You are correct: it is not the same $f^{\ast}$. It is an example of what is called an "induced" map (a map determined by another map, and often given the same "name", because it is uniquely determined by the original map).

The "second" $f^{\ast}$ maps:

$v \otimes w \mapsto f^{\ast}(v)w$.

The reason why we also call the induced map $f^{\ast}$, is that the map $B:V \times W \to W$:

$B(v,w) = f^{\ast}(v)w$ is bilinear in $v,w$, and so factors through the tensor product as a linear map $V \otimes W \to W$.

By linearity, we can extend the "induced" $f^{\ast}$ to a general element of $V \otimes W$ from considering its action on "elementary" or "simple" tensors of the form $v_i \otimes w_j$, where these are basis elements of $V$ and $W$.

Personally, I feel the bilinear map above should be something like $B_{f^{\ast}}$, and the induced map something like $\otimes B_{f^{\ast}}$, but one often encounters situations where one wants to keep the notation from obscuring things by sheer complexity.
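
To see the "factoring" numerically, here is a sketch under my own identification of $V \otimes W$ with $\Bbb R^{nm}$ via the Kronecker product (none of this notation is Winitzki's): the bilinear map $B_{f^{\ast}}(v,w) = f^{\ast}(v)w$ agrees with a single linear map applied to $v \otimes w$.

```python
import numpy as np

n, m = 3, 2
rng = np.random.default_rng(0)
f = rng.standard_normal(n)          # a covector f* on V = R^n, as a row vector
v = rng.standard_normal(n)
w = rng.standard_normal(m)

B = lambda v, w: (f @ v) * w                 # the bilinear map B(v, w) = f*(v) w
T = np.kron(f.reshape(1, -1), np.eye(m))     # induced linear map on V (x) W = R^(n*m)

# v (x) w is modelled by the Kronecker product; B factors as T composed with (x).
print(np.allclose(B(v, w), T @ np.kron(v, w)))   # True
```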
 
  • #10
Deveno said: [quoting post #9 in full]

Hi Deveno,

Thanks for the help ... you certainly clarified that situation ... which was worrying me ...

Could you clarify the following worry ... ...

There seem to me to be two mappings involved here ...

... one is a map

\(\displaystyle f^* \ : \ V \longrightarrow \mathbb{K}\), that is, \(\displaystyle f^* \in V^*\) ... and one other bilinear map ...

\(\displaystyle f^* \ : \ V \otimes W \longrightarrow W\)

BUT ...

... you mention two bilinear maps \(\displaystyle B_{f^*}\) and \(\displaystyle \otimes B_{f^*}\) ... ...

Can you identify the two bilinear maps ... as it seems to me there is only \(\displaystyle f^*\) (which is just a covector and not bilinear) and then the induced map which is bilinear ...

Can you help to clarify this issue? ... Note that I am pleased that Winitzki, although not being careful with his notation, is within the guidelines of accepted practice ...

Peter
 
  • #11
Peter said: [quoting post #10 in full]

I have two maps, $B_{f^{\ast}}$ and $\otimes B_{f^{\ast}}$. The first is bilinear, the second is linear. Both of Winitzki's $f^{\ast}$ maps are linear maps, neither is bilinear.

What the tensor product essentially does is create a vector space from $V \times W$ (which, as a vector space, has dimension $\dim(V) + \dim(W)$) that turns bilinear maps into linear maps; the tensor product itself has dimension $\dim(V)\cdot\dim(W)$, which is the content of Lemma 3. This process is called "factoring through the tensor product".
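
And to connect this back to Lemma 3 itself, here is a quick check (my own sketch; the tensor product is again modelled by the Kronecker product) that the elementary tensors built from two bases are linearly independent, so $\dim(V \otimes W) = \dim(V)\cdot\dim(W)$:

```python
import numpy as np
from itertools import product

E = np.eye(3)    # a basis of V = R^3
F = np.eye(2)    # a basis of W = R^2

# The 3*2 = 6 elementary tensors e_i (x) f_j, as columns of a 6x6 matrix.
tensors = np.column_stack([np.kron(e, f) for e, f in product(E, F)])
print(tensors.shape)                    # (6, 6)
print(np.linalg.matrix_rank(tensors))   # 6: linearly independent, hence a basis of V (x) W
```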
 
  • #12
Deveno said: [quoting post #11 in full]
So ... trying to follow ... :confused: ... sorry to be slow ... but I want to be sure ...

Do we have

\(\displaystyle B_{f^{\ast}} \ : \ V \longrightarrow \mathbb{K}\)

and

\(\displaystyle \otimes B_{f^{\ast}} \ : \ V \otimes W \longrightarrow W\)

Is that right?

(somehow I do not think it is ... apologies for taking a while to get it right ...)

Peter

EDIT ... Can see that what I've written is not correct ... but what is?
 
  • #13
Deveno said: [quoting post #9 in full]
Hi Deveno,

I think one of the reasons that I am ending up very confused is that I have little to no understanding of your statement ... ...

" ... ... The reason why we also call the induced map $f^{\ast}$, is that the map $B:V \times W \to W$:

$B(v,w) = f^{\ast}(v)w$ is bilinear in $v,w$, and so factors through the tensor product as a linear map $V \otimes W \to W$. ... ... "

Are you able to explain in simple terms ...?

Peter
 
  • #14
Peter said: [quoting post #13 in full]

That statement is simply the diagram you drew in one of your other threads (we say a map $f: A \to B$ factors through $h: A \to C$ if there is some map $g:C \to B$ such that $f = g\circ h$; often "$C$" is some special object that makes this possible).
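
Concretely, in the situation of this thread (my own identification of the pieces, not a quote from the text): take $A = V \times W$, $C = V \otimes W$, and the target space $W$, with $h(v,w) = v \otimes w$ and $f = B_{f^{\ast}}$, where $B_{f^{\ast}}(v,w) = f^{\ast}(v)w$. Because $B_{f^{\ast}}$ is bilinear, there is a unique linear map $g : V \otimes W \to W$ with $g(v \otimes w) = f^{\ast}(v)w$, and then

$B_{f^{\ast}} = g \circ h$,

i.e. the bilinear map "factors through" the tensor product; this $g$ is Winitzki's second $f^{\ast}$.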
 

