Confused about the inner product

  • #36
PeroK said:
However, in more general vector spaces, this does not always hold.
And here comes the next confusing point. But I don't think that I have to go into this at this stage.
 
  • #37
Rick16 said:
So in Fleisch's presentation we have two different bases of the same vector space, i.e. vector and covector both belong to the same vector space. What do covectors look like that do not belong to the same vector space?

The vector spaces ##V## and ##\tilde{V}=V^*## are formally not the same. One consists of arrows, and the other one consists of linear functions. In the case of finite dimension ##n## they are isomorphic. I wrote it as
$$
V\cong \mathbb{R}^n\cong \tilde{V}=V^* \quad {(*)}.
$$
This means that both - arrows and functions - can be described by ##n##-tuples of real numbers. That doesn't make them the same object, but it makes them have the same sort of representation, namely by ##n## real coordinates. It also allows us (in the cases ##n=0,1,2,3##) to draw them on the same sheet of paper: the origin of this entire discussion here and my criticism of Fleisch's treatment. Fleisch interprets those linear functions as the lengths of their projections on those arrows, i.e. a linear function (projection) that eats arrows and spits out a length, a real number.

Those isomorphisms in ##{(*)}## are not natural. They depend (as in our case here) on the choice of the metric ##g.## This makes them sort of "artificial". Nevertheless, you can make this geometric interpretation at the expense of confusing two initially different concepts: arrows and linear functions.
$$
v\stackrel{1:1}{\longleftrightarrow } \left(w\stackrel{\tilde{v}}{\longmapsto} \bigl\langle w,v \bigr\rangle =w\cdot v =g(w,v)=w^\tau\cdot v \right)
$$
You can see that this linear function on the right (eating an arrow ##w## and spitting out the real number ##g(w,v)##) can be written in many ways ##(\tilde{v}=v^*\, , \,g(w,v)\, , \,w\cdot v\, , \,w^\tau \cdot v\, , \,\bigl\langle w,v \bigr\rangle )## adding a notational mess to what has already been confused.
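If it helps, this correspondence can be checked mechanically. A minimal numerical sketch (the metric ##g## and the vectors below are invented purely for illustration):

```python
import numpy as np

# Toy example: a 2-dimensional space with a symmetric, positive definite
# metric g.  Vectors are given by their coordinate columns in a fixed basis.
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])

v = np.array([1.0, 2.0])    # an "arrow" in V
w = np.array([4.0, -1.0])   # another arrow, to be eaten by the covector

# The covector v~ associated with v is the linear function w -> g(w, v).
v_tilde = lambda u: u @ g @ v

print(v_tilde(w))           # <w, v> = g(w, v) = w . v = w^tau . v
print(w @ g @ v)            # the same number, written out directly

# In coordinates, v~ is the row of "lowered" components v_i = g_ij v^j,
# and evaluating it is then an ordinary dot product with those components:
v_lowered = g @ v
print(v_lowered @ w)        # again the same number
```

All three printed values agree, which is exactly the point: one number, many notations.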

The best advice is therefore:

martinbn said:
Either stick with this book and not worry about these things or try another.

Here is a text that explains these "dualities" a bit better:
https://arxiv.org/pdf/1205.5935
but it's not an easy read in my opinion.
 
  • #38
Rick16 said:
And here comes the next confusing point. But I don't think that I have to go into this at this stage.
That's easy. If the vector space consists, e.g., of sequences as vectors, then it usually no longer has finite dimension. This means we cannot express its vectors as ##n##-tuples of real numbers anymore. And then the identification (the isomorphisms) ##V\cong \mathbb{R}^n\cong \tilde{V}## breaks down.

A vector space in general is a set of objects that allow addition, stretching, and compressing. The vectors do not always have to be arrows. Functions, sequences, series etc. can all also be added and stretched.
 
  • #39
One last remark: My confusion was obviously caused by the fact that I interpreted ##\vec A=A^i \vec e_i = A_i \vec e^i## from Fleisch as meaning ##\vec A=\tilde A##, but Fleisch never writes ##A_i \vec e^i=\tilde A##. He writes that ##A_i\vec e^i## are the covariant components of vector ##\vec A##. Later he writes "The first thing to understand is that the traditional approach tends to treat contravariant and covariant components as representations of the same object, whereas in the modern approach objects are classified either as "vectors" or as "one-forms" (also called "covectors"). In the modern terminology, vectors transform as contravariant quantities, and one-forms transform as covariant quantities." He never writes that vectors with covariant components are the same as one-forms. One must really read these texts very closely. I am sorry that I took up so much of everybody's time with this.
 
  • #40
[I have not read today's posts, and am following up on your question as to why I did not understand the equation A = ATilda.( ).]

Forgive me, I seem to have misunderstood your notation. I want to reassure you that your instincts are excellent, and that, once we agree what the notation means, there is indeed an interpretation in which the equation A = ATilda.( ) is true and meaningful, namely as follows (where I will write A* instead of ATilda). So I want to justify the claim that, properly understood, “A = A*.( )”.

Let V be a vector space with a real-valued inner product (positive, symmetric, bilinear function of two variables) denoted by A.B for A,B in V. Let V* denote the vector space of real-valued linear functions on V, and if A is in V, write A* for the element of V* whose value at B in V is A.B.

Then we can write the function A* as A* = A*( ) = A.( ).

If V is finite dimensional, this correspondence, taking A to A*, is an isomorphism from V to V*, (because the inner product is assumed positive definite), and we can use it to carry the inner product from V over to V*. Namely given elements f,g in V*, then f = A*, and g = B*, for some unique elements A,B of V, and we can define f.g = A*.B* = A.B.

Note there is always a natural “evaluation” pairing between V and V*, not dependent on the inner product, taking the function F = F( ) in V* and the vector B in V, to the value F(B), of F at B.

Thus if A is a vector in V, then A* = A*( ) denotes the linear function on V whose value at B is A*(B) = A.B.

With our definitions above, then A*.( ) denotes however the linear function on V* whose value at B* is A*.B* = A.B.

Now let V** = (V*)* denote the vector space of all real valued linear functions on V*. Thus the function A*.( ) belongs to V**, (whereas A*( ) belongs to V*).

Then for each A in V, we can define the “evaluation” function on V* whose value at f is f(A), and we denote this linear function on V*, as A** = evaluation at A. In particular, A**(B*) = B*(A) = B.A = A.B. I.e. A** is the unique linear function on V* whose value at B* is A.B.

Since A*.( ) is also a linear function on V* whose value at B* is A*.(B*) = A*.B* = A.B, then A** and A*.( ) are the same function on V*. Hence A*.( ) = A**, i.e. they denote the same element of V**.

Finally, since the mapping from V to V** taking A to A** is an isomorphism when V is finite dimensional, and does not depend on any choice of inner product, we may identify V naturally with V**, and under this identification, indeed the element A*.( ) of V** corresponds to A.

In this sense, “A = A*.( )”, i.e. although the two sides are in different spaces, they do correspond under the natural isomorphism between those spaces. (Note however the fact they define the same function depended on the symmetry of the inner product.)
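For what it's worth, this bookkeeping can be checked mechanically. A minimal sketch on R^3 with the ordinary dot product (the helper names star and double_star are mine, invented for this illustration):

```python
import numpy as np

# Minimal check of the maps above, on R^3 with the ordinary dot product.
A = np.array([1.0, 2.0, 0.0])
B = np.array([3.0, -1.0, 5.0])

def star(v):
    # v* : the linear function on V whose value at u is v . u
    return lambda u: v @ u

def double_star(v):
    # v** : the "evaluation at v" function on V*, sending f to f(v)
    return lambda f: f(v)

A_star, B_star = star(A), star(B)

print(A_star(B))                 # A*(B) = A . B, with A* an element of V*
print(A @ B)                     # the same number

print(double_star(A)(B_star))    # A**(B*) = B*(A) = B . A = A . B
# The inner product carried over to V* gives A*.B* = A.B as well, so the
# function A*.( ) on V* takes the same value at B* as A** does; this is the
# identification A*.( ) = A** made above.
```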
 
  • #41
And one final last remark: Bernacchi writes: "Even a covector can then be expressed as an expansion on its own basis: ##\tilde P=\tilde e^\alpha P_\alpha##". So Bernacchi's expansion of a covector looks the same as Fleisch's expansion of a vector with covariant components. Which further added to my confusion.
 
  • #42
Rick16 said:
And one final last remark: Bernacchi writes: "Even a covector can then be expressed as an expansion on its own basis: ##\tilde P=\tilde e^\alpha P_\alpha##". So Bernacchi's expansion of a covector looks the same as Fleisch's expansion of a vector with covariant components. Which further added to my confusion.
See my post #37.

The origin of the confusion is that a vector is ##v=(v_1,\ldots,v_n)##, while the covector is ##\tilde{v}= \operatorname{times}\;(v_1,\ldots,v_n)##, i.e. the function ##w\longmapsto \bigl\langle w,(v_1,\ldots,v_n)\bigr\rangle .##
 
  • #43
fresh_42 said:
See my post #37.

The origin of the confusion is that a vector is ##v=(v_1,\ldots,v_n)##, while the covector is ##\tilde{v}= \operatorname{times}\;(v_1,\ldots,v_n)##, i.e. the function ##w\longmapsto \bigl\langle w,(v_1,\ldots,v_n)\bigr\rangle .##
Which means that a vector with covariant components is not the same as a covector, doesn't it?
 
  • #44
I think dealing with a lot of this just boils down to: when someone says something equals A, ask yourself: "well, maybe he means A*, or A**; would it make sense then?"
 
  • #45
mathwonk said:
[I have not read today's posts, and am following up on your question as to why I did not understand the equation A = ATilda.( ).]

Forgive me, I seem to have misunderstood your notation. I want to reassure you that your instincts are excellent, and that, once we agree what the notation means, there is indeed an interpretation in which the equation A = ATilda.( ) is true and meaningful, namely as follows (where I will write A* instead of ATilda). So I want to justify the claim that, properly understood, “A = A*.( )”.

Let V be a vector space with a real-valued inner product (positive, symmetric, bilinear function of two variables) denoted by A.B for A,B in V. Let V* denote the vector space of real-valued linear functions on V, and if A is in V, write A* for the element of V* whose value at B in V is A.B.

Then we can write the function A* as A* = A*( ) = A.( ).

If V is finite dimensional, this correspondence, taking A to A*, is an isomorphism from V to V*, (because the inner product is assumed positive definite), and we can use it to carry the inner product from V over to V*. Namely given elements f,g in V*, then f = A*, and g = B*, for some unique elements A,B of V, and we can define f.g = A*.B* = A.B.

Note there is always a natural “evaluation” pairing between V and V*, not dependent on the inner product, taking the function F = F( ) in V* and the vector B in V, to the value F(B), of F at B.

Thus if A is a vector in V, then A* = A*( ) denotes the linear function on V whose value at B is A*(B) = A.B.

With our definitions above, then A*.( ) denotes however the linear function on V* whose value at B* is A*.B* = A.B.

Now let V** = (V*)* denote the vector space of all real valued linear functions on V*. Thus the function A*.( ) belongs to V**, (whereas A*( ) belongs to V*).

Then for each A in V, we can define the “evaluation” function on V* whose value at f is f(A), and we denote this linear function on V*, as A** = evaluation at A. In particular, A**(B*) = B*(A) = B.A = A.B. I.e. A** is the unique linear function on V* whose value at B* is A.B.

Since A*.( ) is also a linear function on V* whose value at B* is A*.(B*) = A*.B* = A.B, then A** and A*.( ) are the same function on V*. Hence A*.( ) = A**, i.e. they denote the same element of V**.

Finally, since the mapping from V to V** taking A to A** is an isomorphism when V is finite dimensional, and does not depend on any choice of inner product, we may identify V naturally with V**, and under this identification, indeed the element A*.( ) of V** corresponds to A.

In this sense, “A = A*.( )”, i.e. although the two sides are in different spaces, they do correspond under the natural isomorphism between those spaces. (Note however the fact they define the same function depended on the symmetry of the inner product.)
I am glad that you acknowledge this equation. I am new to this whole subject, but my acquaintance with Fleisch's presentation is a little older than my acquaintance with the functional approach. I learned about the functional approach from Bernacchi, and Bernacchi takes a "symmetric" approach, i.e. he treats covectors as functions of vectors, and vectors as functions of covectors. I always wondered why you only considered covectors as functions, and vectors just as vectors, which seems like an "asymmetric" approach. I have since seen other posts on this thread with the same presentation as yours, and Schutz seems to do it this way, too. This is apparently something that I have to get acquainted with. In Bernacchi's approach it seems that vectors and covectors are objects on an equal footing, but it seems that I have to get away from this idea and start seeing them as different objects.
 
  • #46
Rick16 said:
Which means that a vector with covariant components is not the same as a covector, doesn't it?
I'm not particularly familiar with that co-contra-variant babble. It means something completely different in mathematics from the way physicists use it. I like to speak of vectors (like tangents) and their coordinates, and dual vectors (co-vectors, like cotangents) and their coordinates. A component of a vector is a coordinate is a number. What should a covariant number be? We have only one kind of number.
 
  • #47
fresh_42 said:
What should a covariant number be?
I hope you are not asking me? I am just trying to learn this.
 
  • #48
mathwonk said:
I think dealing with a lot of this just boils down to: when someone says something equals A, ask yourself: "well, maybe he means A*, or A**; would it make sense then?"
This could make sense in a certain way, but in another way it could also confuse things further.
 
  • #49
well this seems to be what you are asking me to do, if I agree to say that
A*.( ) equals A instead of A**.
 
  • #50
Ok I have found Bernacchi and read pages 1-11. In these pages he is discussing only V and V*. There is no inner product on V, and he calls the natural evaluation pairing VxV*-->R, taking v,f to f(v), the "heterogeneous inner product".

There is at first no way to identify V with V*, so vectors cannot yet be considered as covectors. Then he chooses a basis e1,...,en for V, which gives a way to identify V with coordinate space R^n, by associating to each vector in V, its coordinates in R^n.

But then there is always a unique associated "dual" basis for V*, namely the basis of functions f1,...,fn in V* such that fi(ej) = 1 iff i=j, and = 0 otherwise. Then using this basis for V*, the space V* can also be identified with R^n, using the coordinates in terms of this basis to represent functions in V*.

Since both V and V* are identified with R^n by the choice of a basis of V, this choice of basis gives a way to identify V with V*, i.e. to view vectors as covectors. Namely if A is a vector with coordinates a^1,...,a^n, and B is another vector with coordinates b^1,...,b^n, we view A as the covector whose value at B is a^1b^1+...+a^nb^n.

What is going on here is that R^n has a standard inner product, and once the basis is used to identify V with R^n, we can transfer that inner product to V. Then with this inner product on V, call it <A,B>, we are just getting the familiar identification of V with V*, via A corresponds to <A, >. This is the unique inner product on V for which the chosen basis e1,...,en is orthonormal.

Later however, when he starts over with V, with no basis, and introduces first a "homogeneous" inner product g( , ) for V, and then chooses a basis u1,...,un for V that is not orthonormal for g( , ), we will have two different ways to identify V with V*, and hence two different ways to identify V with R^n, one by means of the basis, as above, and another by means of the inner product g.

I.e. the basis u1,...,un for V defines a dual basis f1,...,fn for V* such that fi(uj) = Kronecker delta ij. Using the inner product g, we can transfer this basis to a basis for V, by choosing u^j to be the unique vector in V such that for all B in V, we have g(u^j,B) = fj(B). If we write a "dot" for g, we have u^j.B = fj(B).

Now if A is any vector in V, we get an element A* of V* where A* is the function A*(B) = A.B. In terms of the dual basis f1,...,fn for V*, this A* has an expansion as A* = c1f1+...+cnfn, for some c1,...,cn.

Now that we have two bases for V, namely {uj} and {u^j}, each vector A has two expansions, A = a^1u1+...+a^nun = a1u^1+...+anu^n.

I claim aj = cj for all j. I.e. the coefficients cj of A* in terms of the natural "dual basis" {fj}, are equal to the coefficients aj of A in terms of the basis {u^j} for V. (This is why Fleisch calls {u^j} the dual basis for V.)

E.g. note that A.u1 = (a1u^1+...+anu^n).u1 = a1, by definition of the u^j. But also A.u1 = A*(u1) = (c1f1+...+cnfn)(u1) = c1, by definition of the fj. Thus a1 = A.u1 = c1.

Similarly, aj = cj, for all j.
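Here is a quick numerical check of this claim (the inner product g and the non-orthonormal basis u1, u2 below are invented; any choice should do):

```python
import numpy as np

# 2-d check of the claim a_j = c_j.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # matrix of the inner product g
U = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # columns are the basis vectors u1, u2

A = np.array([3.0, -1.0])         # an arbitrary vector A (ambient coordinates)

# Reciprocal basis u^1, u^2 (columns of R), fixed by g(u^j, u_i) = delta_ij,
# i.e. by R^T G U = I.
R = np.linalg.inv(U.T @ G)
print(R.T @ G @ U)                # identity matrix, as a sanity check

# Coefficients a_j of A in the basis {u^j}:  A = sum_j a_j u^j.
a = np.linalg.solve(R, A)

# Coefficients c_j of A* in the dual basis {f_j}:  c_j = A*(u_j) = g(A, u_j).
c = U.T @ G @ A

print(a)    # the same numbers...
print(c)    # ...as these, confirming a_j = c_j
```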

So, fuzzy as this sounds, Fleisch seems to be saying he wants to identify V and its inner product with R^n and the standard inner product. But if he does this, then a vector A has to be identified with the coordinate vector (a^1,...,a^n) when thought of as a vector, but with the coordinate vector (a1,...,an) when thought of as a covector........???? yoicks. (As the unpicky rat in Disney's "Ratatouille" said, when the gourmet rat asked him what he was eating, I don't really know.)
 
  • #51
mathwonk said:
well this seems to be what you are asking me to do, if I agree to say that
A*.( ) equals A instead of A**.
Yes, sorry, I did not realize that #44 was connected to #40, because there was another post between them, and I did not understand #44 correctly. Actually, I would not even have thought that this could be an issue. If V* is the dual vector space of V, I would automatically assume that V is the dual vector space of V*. The word "dual" seems to imply that there are two of them, and that there is reciprocity between them. But as usual, things are apparently not as easy as an amateur believes them to be.
 
  • #52
mathwonk said:
So, fuzzy as this sounds, Fleisch seems to be saying he wants to identify V and its inner product with R^n and the standard inner product. But if he does this, then a vector A has to be identified with the coordinate vector (a^1,...,a^n) when thought of as a vector, but with the coordinate vector (a1,...,an) when thought of as a covector........???? yoicks. (As the unpicky rat in Ratatouille said when the gourmet rat asked what he was eating, I don't really know.)
Do you really mean Fleisch now, or are you still talking about Bernacchi?
 
  • #53
I am sure you can guess my answer to that: i.e. I meant what I said.
I do not have a copy of Bernacchi and have not read past page 11 online, since I had to wait 20 seconds for every page to load. So I have not reached Bernacchi's discussion of what happens with an arbitrary inner product on a space. But Bernacchi freely discusses the space V* in pages 1-11, as different from the space V, which Fleisch does not.

In post #50, right before the words: "So, fuzzy as this sounds", insert the words: "Now what does this tell us about the discussion in Fleisch, where V* and its natural dual basis {fj} do not appear, but the (inner product dependent) 'dual basis' {u^j} for V does appear?"
 
  • #54
I want to recapitulate what I have understood so far. At the heart of the matter are expressions of the type ##V_i\vec e^i##. Here is one more time Fleisch's equation from page 133: ##\vec A=A^i\vec e_i=A_i\vec e^i##. This equation means that ##A^i\vec e_i## represents vector A in terms of contravariant components and basis vectors, and ##A_i\vec e^i## represents vector A in terms of covariant components and dual basis vectors.

And here is equation 1.5 from Bernacchi (page 7), showing the expansion of a covector: ##\tilde P=\tilde e^\alpha P_\alpha##. The expressions ##A_i\vec e^i## and ##\tilde e^\alpha P_\alpha## use slightly different notational conventions and different variables, but they are still the same expression. From this fact--that these expressions are the same--I deduced that ##\vec V=\tilde V##. But ##\vec V\neq \tilde V##. Therefore ##A_i\vec e^i## and ##\tilde e^\alpha P_\alpha## can not mean the same thing, although they look the same.

Where is the difference between these two expressions? ##\tilde e^\alpha P_\alpha## represents a covector, and therefore the dual basis vector ##\tilde e^\alpha## should logically belong to the dual vector space V*. ##A_i\vec e^i##, on the other hand, represents a vector and therefore the dual basis vector ##\vec e^i## should logically belong to the vector space V.

You already spent a lot of time explaining this back in post #12, and here is an important quote from this:
mathwonk said:
In particular the elements e^j of the "dual basis" are not covectors, nor are they a basis of the dual space. Rather, given a basis e1,...,en for the space V, with "dual basis" e^1,...,e^n, the operators e^1.( ), ..., e^n.( ), i.e. the corresponding covectors, give the corresponding good basis of the dual space V*.
Do I understand correctly that the elements e^j from the above quote correspond to Fleisch's dual basis vectors ##\vec e^i## and that the operators e^j.( ) correspond to Bernacchi's dual basis vectors ##\tilde e^\alpha##?

The difficulty for me in understanding this lies in the fact that you treat a covector as a function and you treat a vector as not a function. This is apparently how it is generally done, as other posts on this thread use the same approach. But I am not familiar with this approach. My two major sources are Fleisch and Bernacchi. Fleisch does not use the functional approach at all and only mentions it in a note at the end. Bernacchi treats everything as functions, covectors and vectors alike. I must find a text that explains the approach with covectors as functions in order to see more clearly in all this. Can you for now tell me if my conclusion from above goes in the right direction, i.e. that your elements e^j correspond to Fleisch's dual basis vectors ##\vec e^i## and that the operators e^j.( ) correspond to Bernacchi's dual basis vectors ##\tilde e^\alpha##?
 
  • #55
Rick16 said:
The difficulty for me in understanding this lies in the fact that you treat a covector as a function and you treat a vector as not a function. This is apparently how it is generally done, as other posts on this thread use the same approach.
That seems a natural approach to me. We start with a vector space with an inner product and we define the dual space, which is also an inner product space. We can then treat the dual space as our vector space and define the dual space of that. In the cases of interest here, the dual of the dual is isomorphic to the original vector space. And you have complete symmetry.

At that point, you really need to move on with the physics. This has become a serious mental block for you, where you're spending a lot of your limited time exhausting a subject that is tangential to the actual physics.

You need to find a way to move on from this.
 
  • #56
PeroK said:
You need to find a way to move on from this.
Yes, I will try to move on. I have actually ordered Hartle's book based on your recommendation, but while waiting for it, I thought I could still spend some time pondering this question.
 
  • #57
Rick16 said:
I want to recapitulate what I have understood so far. At the heart of the matter are expressions of the type ##V_i\vec e^i##. Here is one more time Fleisch's equation from page 133: ##\vec A=A^i\vec e_i=A_i\vec e^i##. This equation means that ##A^i\vec e_i## represents vector A in terms of contravariant components and basis vectors, and ##A_i\vec e^i## represents vector A in terms of covariant components and dual basis vectors.
That equation is wrong in my opinion. It should read ##\vec A=A^i\vec e_i \cong \tilde A =A_i\vec e^i##, where ##\cong## means isomorphic, not equal, and the isomorphism is given by ##g(\vec A) = \tilde A##.

As far as the functional approach versus the component approach goes, they are essentially the same if you know enough linear algebra to make the connections. This is due to the fact that a linear function is completely determined by its values on a basis of a vector space. As others have said, you should probably move on, since to fully understand this you probably need a text that compares all the perspectives, or you could read @fresh_42's Insights articles on tensors.
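As a small illustration of "determined by its values on a basis" (basis and values invented for the example):

```python
import numpy as np

# Once f(u1) and f(u2) are prescribed, linearity fixes f on every vector.
U = np.array([[1.0, 1.0],
              [0.0, 2.0]])            # columns are the basis vectors u1, u2
f_on_basis = np.array([4.0, -3.0])    # prescribed values f(u1), f(u2)

def f(v):
    # Expand v in the basis (solve U @ coeffs = v), then use linearity.
    coeffs = np.linalg.solve(U, v)
    return coeffs @ f_on_basis

print(f(np.array([2.0, 6.0])))        # uniquely determined by the two values above
```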
 
  • #58
Everything depends on definitions. In Fleisch's book, the symbol e^j is defined as a vector in V, such that ui.u^j = 1 iff i=j and 0, otherwise. To me this makes his equation A = Aje^j correct, i.e. consistent with his (possibly non standard) definitions.
In other books apparently the same symbol e^j denotes instead a covector in V* whose value at ei equals 1 iff i=j and 0 otherwise. With this other definition, indeed ATilda = Aje^j, but with Fleisch's definition, I agree that A = Aje^j (in V), and ATilda = A.( ) = Aj(e^j.( )) (in V*), [although AFAIK Fleisch does not mention V* and hence does not write this last equation].
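A small numerical sketch of the two conventions side by side (metric and basis invented, not taken from Fleisch's book):

```python
import numpy as np

# Two readings of the symbol e^j, in an invented 2-d example.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # the inner product ("dot")
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])            # columns are the basis vectors e1, e2

# Fleisch's reading: e^j is a VECTOR in V with  e_i . e^j = delta_ij.
E_up = np.linalg.inv(E.T @ G)         # columns are e^1, e^2
print(E.T @ G @ E_up)                 # identity matrix, as required

A = np.array([3.0, -1.0])
A_cov = E.T @ G @ A                   # covariant components A_j = A . e_j

# With Fleisch's definition, A_j e^j really is the vector A itself:
print(E_up @ A_cov)                   # reconstructs A, an element of V

# The other reading: e^j is the FUNCTION w -> e^j . w, an element of V*;
# then A_j (e^j.( )) is the covector ATilda = A.( ), not A:
A_tilde = lambda w: A @ G @ w
print(A_tilde(E[:, 0]), A_cov[0])     # ATilda(e1) equals A_1, etc.
```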

I also agree as to the wisdom of moving on, and will try to.
 
  • #59
mathwonk said:
Everything depends on definitions. In Fleisch's book, the symbol e^j is defined as a vector in V, such that ui.u^j = 1 iff i=j and 0, otherwise. To me this makes his equation A = Aje^j correct, i.e. consistent with his (possibly non standard) definitions.
In other books apparently the same symbol e^j denotes instead a covector in V* whose value at ei equals 1 iff i=j and 0 otherwise. With this other definition, indeed ATilda = Aje^j, but with Fleisch's definition, I agree that A = Aje^j (in V), and ATilda = A.( ) = Aj(e^j.( )) (in V*), [although AFAIK Fleisch does not mention V* and hence does not write this last equation].

I also agree as to the wisdom of moving on, and will try to.
I looked briefly at his chapter 4 on covariant and contravariant tensors and you appear to be correct. I would say that his treatment is far from standard and would recommend something by Schutz for a more standard presentation, or maybe even MTW if you are adventurous.

There are definitely some nuggets in there and it is an interesting geometric explanation of dual vectors, but his conception of dual vectors is not what we typically describe as a covector. You can get from one to the other using the isomorphisms from the inner product, but he doesn't mention them, or at least the brief part I read didn't.
 
