Math Newb Wants to know what a Tensor is

  • #36
gvk said:
First of all, covariant and contravariant vectors are not different vectors. They represent ONE VECTOR (an arrow :-) in two different coordinate systems (dual, or reciprocal, or skew, or...coordinates). The reciprocal system is equally satisfactory for representing vectors, but 'contravariant' vector looks exactly the same as 'covariant'. So "visualize" them as ONE tangent arrow (toothpick) if you wish. Two parallel blades, probably, mean direct and reciprocal coordinate planes, which may have complement scale or orientation, but, of course, should be parallel (no less no more).
Secondly, any quantity that we wish to define, be it scalar, vector, or tensor, must be independent of the special coordinate system. We shall adopt this as a fundamental principle. However, its representation will depend on the particular system.

I think what may have confused you is that for a Euclidean vector space the contravariant and covariant vectors are the 'same' (i.e. in any given frame a pair of dual vectors have the same components), as the components of the metric are simply the components of an identity matrix.

But in general a vector and its one-form belong to different spaces; in fact the dual space of some linear vector space S is the set of all linear functions that map a vector in S to some (in general complex) number, and it also constitutes a linear vector space in its own right. Further, the components of a pair of dual vectors A^ν and A_ν in the same frame are not in general the same (also, belonging to different spaces, they have different bases).
 
Last edited:
  • #37
a "vector" is the derivative of a curve, i.e. of a map from R^1 to R^n, whereas a covector (dual vector) is the opposite, the gradient of a function from R^n to R^1. (at one point)

That's why usually a vector is a column, and a covector is a row. So one way to remember them is by the sound "covector" equals "rowvector".

Geometrically a vector is a tangent vector to a curve, while a covector (dual vector, gradient vector) is a normal vector to a level surface.


(Unfortunately covectors are only covariant in differential geometry, they are contravariant in category theory, algebraic topology, and the rest of mathematics.)
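To make the column/row picture concrete, here is a small numerical sketch (my own illustration, not from the post), assuming a curve c(t) = (t, t^2) and a function f(x, y) = x^2 + y:

```python
import numpy as np

# A vector: the derivative of a curve c: R^1 -> R^n, here c(t) = (t, t^2), at t = 1.
def c(t):
    return np.array([t, t**2])

h = 1e-6
v = (c(1 + h) - c(1 - h)) / (2 * h)   # tangent vector, approx. (1, 2) -- a "column"

# A covector: the gradient of a function f: R^n -> R^1, here f(x, y) = x^2 + y, at c(1).
def f(p):
    x, y = p
    return x**2 + y

grad = np.array([(f(c(1) + h * e) - f(c(1) - h * e)) / (2 * h)
                 for e in np.eye(2)])  # gradient, approx. (2, 1) -- a "row"

# Pairing the row with the column gives the derivative of f along the curve,
# i.e. d/dt f(c(t)) at t = 1:
df_dt = (f(c(1 + h)) - f(c(1 - h))) / (2 * h)
```

Here `grad @ v` agrees with `df_dt`, which is exactly the sense in which a covector "eats" a tangent vector and produces a number.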
 
Last edited:
  • #38
chroot said:
The difference between a vector and a dual vector is not just a change of coordinate system, gvk. Vectors and dual vectors live in entirely different spaces, and are certainly not the same vector.

I know what you're trying to say: a vector is related to its dual via a one-to-one mapping.

- Warren
No, not only this I'm trying to say, Warren.
You completely forgot who asked "what a tensor is?" and what the level of this person is. He/she is a high school student! If you teach them in such a manner, they will never understand anything about vector and tensor calculus at all, and will never ask you again.
It is the same as starting to teach them QM before they have learned anything about classical mechanics. But educational sequence plays an even more important role in math than in the natural sciences. Here everything should be sequential and well understood in an elementary manner.
So, they first need to understand that "a vector is an arrow or column" and "a tensor is a matrix", and what the properties of these notions are in Euclidean space. This is most important, because they may use tensors in the future just to calculate stress and deformations in materials. :smile:
 
  • #39
i cannot agree with you less, gvk. this high school student is also a student of multivariable calculus at UCLA. As such I think he is capable of understanding what things mean, not just the symbols used for them.
 
  • #40
gvk said:
You completely forgot who asked "what a tensor is?" and what is the level of this person.
You were replying to a question of mine, not the original poster's.
He/she is a student of High School! If you will teach them in such manner, they never understand anything about vector and tensor calculus at all, and will never ask you again.
Purposefully simplifying material for pedagogical purposes does not give you license to say things that are patently false. You could say "vectors and dual vectors are closely related, and can be thought of as different 'representations' of the same fundamental thing" -- that would be fine. Saying that vectors and dual vectors are related through a coordinate transformation, on the other hand, is quite wrong, and would probably do more harm to the student's understanding than good.

- Warren
 
  • #41
chroot said:
You were replying to a question of mine, not the original poster's.
Purposefully simplifying material for pedagogical purposes does not give you license to say things that are patently false. You could say "vectors and dual vectors are closely related, and can be thought of as different 'representations' of the same fundamental thing" -- that would be fine. Saying that vectors and dual vectors are related through a coordinate transformation, on the other hand, is quite wrong, and would probably do more harm to the student's understanding than good.
- Warren


Do you think the student should first learn non-Euclidean geometry and the modern notion of a 'dual space', and only then come back to the Euclidean case?
I don't think this is the right way. I am deeply convinced, and say again: math education should proceed in sequential order; any new material should overlap the current level of knowledge without gaps, should be accompanied by lots of simple examples, and any new notions should be explained in connection with old ones. Students should first learn the classical material with contravariant and covariant notations for vectors and tensors (using direct and reciprocal systems), and then turn to the more complex.


And I did not say that "vectors and dual vectors are related through a coordinate transformation". This is your interpretation.
Read carefully: "covariant and contravariant vectors are not different vectors. They represent ONE VECTOR (an arrow :-) in two different coordinate systems". I meant, of course, the Euclidean case.
Is that 'patently false'?
 
  • #42
mathwonk said:
i cannot agree with you less, gvk. this high school, student is also a student of multivariable calculus at UCLA. As such I think he is capable of understanding what things mean, not just the symbols used for them.
What are you talking about?
I have a son in high school in MN, and 30 years' experience of teaching (in another country), and can tell you that US school education in math is about 1.5-2.5 years behind Europe's. Moreover, it's 1 year behind what I learned at the same age as my son many, many years ago.
At age 14 we knew all the elementary and trigonometric functions, algebra (linear and quadratic equations), geometry in the plane, and much, much more. Now, in 10th grade, they start to learn so-called 'integrated math', which contains only a quarter of what I mentioned.
How do you expect them to know the term "rank" of a matrix or "homogeneous" transformation at the end of HS?
I guess it is impossible with this level of math education; even a guy who attended multivariable calculus at UCLA has still missed a lot of math.
 
  • #43
gvk,

A vector and its dual do not even exist in the same vector space, even when the spaces are Euclidean. It is indeed patently false to assert they are in any way related through a coordinate transform, no matter what kind of spaces you're dealing with. I feel sorry for your students if you are willing to commit such grave errors for nothing more than simplifying your pedagogy.

- Warren
 
  • #44
You do not seem to realize, gvk, that not all high schools in the US are at the same low level. You are of course correct that the average high school, and even most high schools, are at a low level, but there are some high schools which are different. And even at average high schools there are students who are different.

I have for example taught at a high school where at least some of my 10th graders learned completeness of the real numbers, countability and uncountability, the archimedean axioms, and limits, with complete proofs. Other high schoolers there studied Galois theory with me. In another class I taught linear algebra, matrices, vector geometry of n dimensions, and calculus of several variables including integration of differential forms. There are other high schools in the US where students are at even higher levels.

I have had one high school student who led my class in graduate PhD-preparation-level algebra at a major state university. He graduated from university and high school simultaneously.

Your comments may be correct for a generic case but do not seem to apply to the student we are discussing.
 
Last edited:
  • #45
Could you please give me a little more detail about those high schools (city, state)? I have never heard of such unique cases here.
Sorry for the digressions in these posts. When I read the reply from StonedPanda, it sounded to me like discouragement:
"I definitely picked the wrong name for this forum!"
 
  • #46
So then, I begin to realize that the low level is a worldwide problem, not something that happens only in Spain. Well, this is a very poor consolation, but... still, it is one.
 
  • #47
chroot said:
A vector and its dual do not even exist in the same vector space, even when the spaces are Euclidean.
- Warren
A Euclidean vector space with a metric coincides with its dual, so it is the same: R = R*.
Most students learn from elementary examples and only later move to the general definitions. It is not "my pedagogy", it is the natural way of learning. (I am sure you learned the same way too. By the way, at what age and where did you learn about dual spaces and covariance? High school or college?) However, in more general cases, you are right.
 
  • #48
gvk said:
A Euclidean vector space with a metric coincides with its dual, so it is the same: R = R*.
Most students learn from elementary examples and only later move to the general definitions. It is not "my pedagogy", it is the natural way of learning. (I am sure you learned the same way too. By the way, at what age and where did you learn about dual spaces and covariance? High school or college?) However, in more general cases, you are right.
I continue to disagree. A vector space and its dual space are not the same, even when the two spaces are both Euclidean. Just because two objects have the same components, for example, does not mean they are really the same. You still need to use the metric to convert between them -- it just happens that the metric is the identity matrix.

You seem to be missing the point, anyway. If you're going to teach students anything about dual vector spaces, you should teach them correct things. I'm not saying you should lump all the complexity of dual spaces on them at once, but saying "vectors and dual vectors are related through a coordinate transform" is just patently false. Thankfully, you're not a teacher.

- Warren
 
  • #49
gvk, try to understand what chroot is saying. there is a big difference between saying there is an isomorphism between two spaces and saying they are the same.

i.e. a "metric" or dot product on euclidean space allows us to make a nice one to one identification of the space and its dual by matching up the vector v with the operator

v.( ), but that does not mean that a vector and dotting with that vector are the same.

This is the common problem with people struggling elsewhere on this site with understanding "tensors". If you just look at the notational representation of an object and not at what it means, or how it behaves, you lose most of the understanding, and many things look the same.

The key property of any identification is how it behaves under mappings, and a dual space behaves exactly opposite to the original space in this regard.

I.e. if f:V-->W is a linear mapping, then there is a natural mapping in the other direction on duals, f*:W*-->V*, taking the functional n:W-->R, acting on vectors in W, to the functional f*(n) = n∘f: V-->W-->R.

If you use the dot product to identify R^n with its dual, then a linear map of R^n given by a matrix T, should correspond under this identification to the linear map given by the transpose matrix, not to the same matrix.

If the spaces had become "equal" one might think one could use the same matrix.
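A quick numerical check of this last point (my own sketch, with randomly chosen matrices, not part of the original post): under the dot-product identification, the dual of T really does act as the transpose.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # a linear map f: R^3 -> R^3, as a matrix
v = rng.standard_normal(3)        # a vector in the source space
w = rng.standard_normal(3)        # a vector in the target space

# Identify w with the covector "dot with w".  The pulled-back functional
# f*(w.( )) acts on v by the rule (f* n)(v) = n(f(v)):
pullback_value = w @ (T @ v)

# Under the dot-product identification, this functional corresponds to the
# vector T.T @ w (the transpose acting on w), NOT to T @ w:
transpose_value = (T.T @ w) @ v
```

`pullback_value` and `transpose_value` agree for any choice of T, v, w, confirming that the identified map is the transpose matrix rather than T itself.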


best wishes,

PS: Some good high schools in the US include the Paideia School in Atlanta, Andover and Exeter in New Hampshire I believe, Brooklyn Technical High School and the Bronx High School of Science in New York. Other students at average high schools attend university while still in high school. My older son e.g. easily excelled in calculus classes at Georgia Tech before being accepted to both Stanford and Harvard. I believe the high school student who was the top performer in my university-level algebra class was from Gainesville High in Georgia. Westminster in Atlanta is also very strong, and there are many others. New Trier High School in Winnetka, Illinois is very famous.
 
Last edited:
  • #50
here is an example of when two different vector spaces can be regarded as almost the same: let V be any vector space over the real scalars, and V* = Hom(V,R) = its dual space, the space of all linear functions from V to R. Then let V** = (V*)*

= Hom(V*,R) be the dual of V*, the space of all linear mappings from V* to R.

Then it is possible to identify V with V**, when V is finite dimensional, so that they are essentially the same. Just let a vector v in V be the map from V* to R given by "evaluation at v". I.e. if n:V-->R is a linear operator in V*, define v(n) = n(v).


This, when V is finite dimensional, is an isomorphism with a nice property:

whenever there is given a linear map T:V-->W, the associated map T**:V**-->W**

takes an operator s:V*-->R, in V**, to the operator T**(s) = (s∘T*):W*-->R, in W**. i.e. such that, if t:W-->R is an element of W*, then (T**(s))(t) = (s∘T*)(t)

= s(T*(t)) = s(t∘T).

confusing isn't it?

But anyway, one can "easily show" that for any two maps (S∘T):V-->W-->U,

(S∘T)** = S** ∘ T**:V**-->W**-->U**.

The point is that not only can one make the spaces V and V** correspond, one can also make maps between them correspond in a natural way.

In particular, under the isomorphisms above of V-->V** and W-->W**, for any map T:V-->W, the compositions V-->V**-->W**

and V-->W-->W** are equal.
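In matrix terms (my own sketch, again using the dot product to write dual maps as transposes), the contrast between * and ** can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 2))   # T: V --> W
S = rng.standard_normal((2, 2))   # S: W --> U

# Duals reverse composition: (S o T)* = T* o S* (transpose of a product).
dual_of_composite = (S @ T).T
composite_of_duals = T.T @ S.T

# Double duals compose in the same direction as the originals:
# (S o T)** = S** o T**, and in matrices T** = (T.T).T is T again.
double_dual_of_composite = (S @ T).T.T
```

So `dual_of_composite` equals `composite_of_duals` (order reversed), while `double_dual_of_composite` is just `S @ T` back again: the matrix shadow of the natural correspondence V-->V**.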

Challenge: There is no way in the world to do this for dual spaces! I.e. there is no way whatever to choose an isomorphism V-->V* from each space to its dual space, and a compatible correspondence between maps taking T to T*, with the nice properties above.


this is called category theory, and these well-behaved guys, like the correspondence V-->V**, are called "natural transformations". Basically it means you should be aware of how maps between your spaces behave as much as, or more than, how points in your spaces look.

You might enjoy the original article laying out these ideas, by, I think, Eilenberg and MacLane, now about 50 or 60 years old.
 
Last edited:
  • #51
Maybe this text can clarify the meanings of covariance and contravariance for less mathematical people:

http://www.mathpages.com/home/kmath398.htm
 
Last edited by a moderator:
  • #52
If a quantity is covariant it means it transforms like the basis vectors; contravariant means it transforms oppositely to the basis vectors.
 
  • #53
look, if f:X-->Y is a differentiable map, and v is a tangent vector to X, and df is the derivative of f, then df(v) is a tangent vector to Y. this means v transforms "covariantly" in category language, i.e. in the same direction as the map f, i.e. from X to Y.

If on the other hand, q is a covector on Y, i.e. a function that sends a tangent vector on Y to a number, then df*(q) is a covector on X, as follows: if v is a tangent vector on X, then df*(q)(v) = q(df(v)). thus df* takes covectors on Y to covectors on X, i.e. covectors go in the opposite direction from the map f.

that's all there is to it.

of course the confusion is that in differential geometry, which i suppose means also in physics, the terminology is backwards.
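Here is the same statement in coordinates, as a runnable sketch (my example map, not from the post): f(x, y) = (x^2, xy), with df its Jacobian. Vectors ride forward with df; covectors come back with df*.

```python
import numpy as np

def f(p):                        # a differentiable map f: X --> Y, with X = Y = R^2
    x, y = p
    return np.array([x**2, x * y])

def df(p):                       # its derivative (Jacobian matrix) at p
    x, y = p
    return np.array([[2 * x, 0.0],
                     [y,     x  ]])

p = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])        # tangent vector at p on X

v_pushed = df(p) @ v             # df(v): a tangent vector at f(p) on Y

q = np.array([0.5, 2.0])         # a covector on Y (a row)
q_pulled = q @ df(p)             # df*(q) = q o df: a covector on X

# The defining property df*(q)(v) = q(df(v)):
lhs = q_pulled @ v
rhs = q @ v_pushed
```

`lhs` and `rhs` are equal (both 13.0 here), and notice the directions: `df(p)` multiplies tangent vectors from X into Y, while covectors on Y are multiplied back into covectors on X.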
 
  • #54
mathwonk said:
If on the other hand, q is a covector on Y, i.e. a function that sends a tangent vector on Y to a number, then df*(q) is a covector on X, as follows: if v is a tangent vector on X, then df*(q)(v) = q(df(v)). thus df* takes covectors on Y to covectors on X, i.e. covectors go in the oposite direction from the map f.

mathwonk,

i couldn't agree with you more. this well-defined behavior of differential forms with respect to pullback is one of the most intriguing and useful properties of forms. a more subtle point is also that these are still well defined even if the inverse mapping is not! tensors in general do not enjoy this property.

i have nothing against the physicists' way of doing math, but personally i find the terminology and the way that they go about doing tensor analysis extremely confusing. when i was first learning this stuff (and am still learning it) i started by reading books that taught tensors the classical way: "something is a tensor if it transforms in blah blah way under a coordinate change blah blah". i did this for some time until i realized that this is an extremely confusing way of going about it... i then picked up Frankel's and Bishop's books on the subject and everything made perfect sense.

i think that one of the main hurdles is that in classical tensor analysis, people dealt with the components of a tensor rather than the tensor itself! this is still a major point of confusion in the subject. furthermore, why bother saying that a tensor is something that changes in a certain way with a coordinate change, when it is nearly always impractical to work in terms of coordinates anyway??
 
  • #55
gvk said:
First of all, covariant and contravariant vectors are not different vectors. They represent ONE VECTOR (an arrow :-) in two different coordinate systems (dual, or reciprocal, or skew, or...coordinates). The reciprocal system is equally satisfactory for representing vectors, but 'contravariant' vector looks exactly the same as 'covariant'.

This is correct.

Incidentally, when we refer to a vector (or, more generally, a tensor) as being either contravariant or covariant we're abusing the language slightly, because those terms really just signify two different conventions for interpreting the components of the object with respect to a given coordinate system, whereas the essential attributes of a vector (or tensor) are independent of the particular coordinate system in which we choose to express it. In general, any given vector or tensor can be expressed in both contravariant and covariant form with respect to any given coordinate system.
from http://www.mathpages.com/rr/s5-02/5-02.htm
 
  • #56
Peterdevis said:

No it isn't; as has been pointed out before, so-called contravariant and covariant vectors belong to two different vector spaces (in fact a covariant vector is a linear function of a contravariant vector and vice versa).

IMHO this confusion arises because people are used to dealing with Cartesian tensors, where the distinction isn't obvious.

All your link is really saying is that there exists a certain one-to-one correspondence between a vector space and its dual vector space that maps a vector to its dual vector; this one-to-one correspondence is called the metric and is a tensor of rank (0,2). Clearly, identifying a pair of dual vectors as a single vector is incorrect.
 
Last edited:
  • #57
jcsd said:
All your link is really saying is that there exists a certain one-to-one correspondence between a vector space and its dual vector space that maps a vector to its dual vector; this one-to-one correspondence is called the metric and is a tensor of rank (0,2)

No, it isn't! The problem is that most math people see only one-to-one correspondences and not what tensors really are. Tensors are primarily objects that are independent of the coordinate system.
When you talk about covariant or contravariant vectors, you talk about the components of the tensor, expressed in a certain basis. Both components and basis are coordinate dependent!
When you take "tangent vectors" as the basis you call the vector components covariant, because when you change the coordinate system the components transform by the covariant transformation rule (and the basis vectors by the contravariant transformation rule).
By not seeing that covariant and contravariant components describe the same object, you don't understand the power of the concept of tensors.
 
  • #58
What is this object they are describing? As I said earlier, the only object I can see that they are describing is a pair of dual vectors, but from its very name you can see that we think of this object as two different objects!

These days we don't tend to define tensors by their transformation rules; we instead prefer to define them as multilinear functions of vectors and covectors (vector = contravariant vector, covector = covariant vector), so already in our definition of a tensor we've described vectors and covectors as different objects! Also, by seeing them as the same object you may well miss out on the fact that covectors are linear functions which map a single vector to a number, or that a vector is a linear function that maps a covector to a number.
 
  • #59
Note that when we raise or lower the index of a vector or a covector, what we are really doing is multiplying the metric by the vector/covector.
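As a concrete sketch (my own numbers, with a deliberately non-identity metric so that the two sets of components actually differ):

```python
import numpy as np

g = np.array([[2.0, 1.0],        # metric components g_ij (symmetric, pos. definite)
              [1.0, 3.0]])
g_inv = np.linalg.inv(g)         # inverse metric components g^ij

v = np.array([1.0, 4.0])         # contravariant components v^i

v_lower = g @ v                  # lowering the index: v_i = g_ij v^j
v_raised = g_inv @ v_lower       # raising it back:    v^i = g^ij v_j
```

`v_lower` comes out as (6, 13), different from `v`, and `v_raised` recovers `v` exactly (up to rounding). With the identity metric the lowered components would coincide with the originals, which is why the distinction is invisible for Cartesian tensors.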
 
  • #60
Wow I read this entire thread and I fully don't understand tensors. I don't understand a lot of the notation (is that the word? :-) or even the application.

I'm 22 and come from more of a design background, I haven't really done any maths since I was about 15 but tonight I was hoping I'd be able to get the hang of "general relativity". (I was never bad at maths, btw, just impossibly slack.)

I have a solid grasp of 3d modelling/animation software, as far as x, y, z vertices can animate along a timeline while being joined (via "vectors"?) to unlimited numbers of other vertices. Will this help me comprehend tensors?

Specific question, what do the little right pointing arrows signify in maths?

Cheers
You guys are amazing.
Pete

PS John Napier (1550 - 1617) is a direct great^x grandad of mine! Lol, and I've never studied logarithms until tonight.
 
  • #61
pete66 said:
Specific question, what do the little right pointing arrows signify in maths?

On top of a letter it signifies that this is a vector, but I assume that is not your question.

V-->W signifies a 'mapping' from V to W. This is another name for a transformation. It is usually a function that takes objects from a space V to a space W. In linear algebra, for example, the mapping is really a matrix multiplication. It brings one vector from a certain vector space (V) to another (W).
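For instance (a made-up matrix, just to illustrate the notation):

```python
import numpy as np

# A mapping T: V --> W written as a matrix, taking vectors in R^2 (V) to R^3 (W).
T = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])

v = np.array([2.0, 5.0])   # a vector in V
w = T @ v                  # its image in W, a vector with 3 components
```

Here `w` is (12, 5, 6): the matrix multiplication carried the 2-component vector `v` over to a 3-component vector in W.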
 
  • #62
"An nth-rank tensor in m-dimensional space is a mathematical object that has n indices and components and obeys certain transformation rules. Each index of a tensor ranges over the number of dimensions of space. However, the dimension of the space is largely irrelevant in most tensor equations (with the notable exception of the contracted Kronecker delta). Tensors are generalizations of scalars (that have no indices), vectors (that have exactly one index), and matrices (that have exactly two indices) to an arbitrary number of indices.

Tensors provide a natural and concise mathematical framework for formulating and solving problems in areas of physics such as elasticity [more generally constitutive models of materials], fluid mechanics, and general relativity."

from http://mathworld.wolfram.com/Tensor.html

all one ever wanted to know about tensors and then some. :biggrin:
 
  • #63
Astronuc said:
"An nth-rank tensor in m-dimensional space is a mathematical object that has n indices and components and obeys certain transformation rules. Each index of a tensor ranges over the number of dimensions of space. However, the dimension of the space is largely irrelevant in most tensor equations (with the notable exception of the contracted Kronecker delta). Tensors are generalizations of scalars (that have no indices), vectors (that have exactly one index), and matrices (that have exactly two indices) to an arbitrary number of indices.
:

i'm going to go out on a limb here and say that this definition is the whole problem with the way tensors are looked at...it doesn't really tell you anything meaningful. just saying that something follows certain transformation rules does not help you...

a tensor is a multilinear mapping, as follows:

[tex] T: V^* \times V^* \times V^* \times ... \times V \times V \times V ... \rightarrow R [/tex]

if the number of [tex]V^*[/tex]'s is some [tex]k[/tex], and the number of [tex]V[/tex]'s is some [tex]l[/tex], then we have a tensor of rank (k,l).

so it maps some combination of vectors and covectors into reals, and it does so in a multilinear way. so a [tex](2,2)[/tex] tensor, for instance, would be:

[tex]T(\alpha, \beta, \vec{v}, \vec{w}) = \alpha_i \beta_j v^k w^l \, T(dx^i, dx^j, \partial_k, \partial_l)[/tex]

and let's look at the metric tensor:

[tex]G = g_{ij} \, dx^i \otimes dx^j [/tex]

let's feed it two vectors:

[tex] G(\vec{v}, \vec{w}) = g_{ij} \, dx^i(\vec{v}) \, dx^j(\vec{w}) = g_{ij} \, dx^i(v^k \partial_k) \, dx^j(w^l \partial_l) = g_{ij} v^k w^l \, dx^i(\partial_k) \, dx^j(\partial_l) = g_{ij} v^k w^l \, \delta^i_k \delta^j_l = g_{ij} v^i w^j[/tex]

most importantly, and the whole point that is lost with the wolfram definition, is that [tex]g_{ij}[/tex] is NOT the metric tensor, it is the components of the metric tensor. The metric tensor is the whole expression for [tex]G[/tex].

I can prove it to you: [tex]G[/tex] is invariant under a coordinate change, that is it will always map you to the same values no matter what coordinates you are in. [tex]g_{ij}[/tex] on the other hand, will be different in various coordinate systems. look at the tensor [tex]G[/tex] and you will see that this _must_ be the case.

this is no different than confusing the components of a vector with the vector itself! This is the problem with the so-called "classical definition": it defines a tensor in terms of what its components do, which is unnatural and quite frankly not very useful.
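This claim is easy to check numerically (my own sketch, assuming a linear change of basis with matrix A, under which vector components transform with A^{-1} and metric components with A^T g A):

```python
import numpy as np

g = np.eye(2)                      # metric components g_ij in the original coordinates
v = np.array([1.0, 2.0])           # vector components v^i
w = np.array([3.0, 1.0])           # vector components w^j

A = np.array([[2.0, 1.0],          # change-of-basis matrix
              [0.0, 1.0]])
A_inv = np.linalg.inv(A)

v_new = A_inv @ v                  # components transform contravariantly
w_new = A_inv @ w
g_new = A.T @ g @ A                # metric components transform covariantly

value_old = v @ g @ w              # G(v, w) computed in the old coordinates
value_new = v_new @ g_new @ w_new  # G(v, w) computed in the new coordinates
```

`g_new` is visibly different from `g`, but `value_old` and `value_new` agree (both 5.0 here): the components changed, the tensor did not.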
 
  • #64
The question is:


[tex]G = g_{ij} \, dx^i \otimes dx^j = g^{ij} \, \partial_i \otimes \partial_j ?[/tex]
 
  • #65
Peterdevis said:
The question is:


[tex]G = g_{ij} \, dx^i \otimes dx^j = g^{ij} \, \partial_i \otimes \partial_j ?[/tex]

let's see:

[tex] G^{-1}(\alpha, \beta) = g^{ij} \, \partial_i(\alpha) \, \partial_j(\beta) = g^{ij} \alpha(\partial_i) \beta(\partial_j) = g^{ij} v_i w_j = g^{ij} g_{ik} v^k \, g_{jl} w^l = g_{kl} v^k w^l = g_{ji} v^j w^i [/tex]

by symmetry of the inner product (by definition) however, [tex]g_{ij} = g_{ji} [/tex] so we can write:

[tex] g_{ij} v^j w^i = g_{ij} v^i w^j = G(\vec{v}, \vec{w}) [/tex]

so [tex] G^{-1}(\alpha, \beta) = G(\vec{v}, \vec{w}) [/tex]

which defines our relationship between vectors and covectors using this so-called metric tensor.
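The same relationship can be verified with explicit numbers (my own sketch): lower the indices of v and w with g_ij, feed the resulting covectors to g^ij, and the original value of G(v, w) comes back.

```python
import numpy as np

g = np.array([[2.0, 1.0],   # metric components g_ij
              [1.0, 3.0]])
g_inv = np.linalg.inv(g)    # inverse metric components g^ij

v = np.array([1.0, -2.0])
w = np.array([0.5, 4.0])

alpha = g @ v               # covector components: alpha_i = g_ij v^j
beta = g @ w                # covector components: beta_j  = g_jk w^k

G_of_vectors = v @ g @ w                    # G(v, w) = g_ij v^i w^j
G_inv_of_covectors = alpha @ g_inv @ beta   # G^{-1}(alpha, beta) = g^ij a_i b_j
```

`G_inv_of_covectors` equals `G_of_vectors`, since g^ij is by construction the matrix inverse of g_ij: that is the coordinate-level content of the identity above.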
 
  • #66
jcsd said:
What is this object they are describing? As I said earlier, the only object I can see that they are describing is a pair of dual vectors, but from its very name you can see that we think of this object as two different objects!

I don't think you have read the link http://www.mathpages.com/rr/s5-02/5-02.htm. When you look at the drawing you see the object P and its covariant and contravariant expressions. So one object, two expressions!
 
  • #67
Peterdevis said:
I don't think you have read the link http://www.mathpages.com/rr/s5-02/5-02.htm. When you look at the drawing you see the object P and its covariant and contravariant expressions. So one object, two expressions!

I believe that both yourself and jcsd are correct.

jcsd is saying that tensors are multilinear mappings that are coordinate independent, and that we can define everything in terms of linear functionals mapping to the reals. this is true. in fact we can even go so far as to say that [tex] \alpha [/tex] is the covector such that [tex] G^{-1}( . , \alpha) = G( . , \vec{v}) [/tex] is satisfied for some vector [tex] \vec{v} [/tex] (where the first arguments are fixed by bilinearity) and use the metric tensor as a linear transformation to the dual space.

Peterdevis is saying that the components and basis are very much coordinate dependent (by themselves), and that this is important to understand the transformation laws. this is also true.

Am I not mistaken here?

I will add one thing though: our basis vectors and the components of our covectors, even when used tensorially, are NOT the same even though they follow the same transformation laws. they live in different spaces altogether. example: we can pushforward the basis of our tangent vectors (as defined by our differential map or the Jacobian) but we cannot push forward the components of our covectors, we must use the pullback.
 
  • #68
That's what I am trying to say: we're talking about vectors that live in different vector spaces, so we really shouldn't see them as the same object, even if there exists an important bijection between them (the metric tensor). In fact it's worth noting that the scalar product isn't always defined; in these cases are we then discussing incomplete objects?

Peterdevis isn't that far off the mark; he's just viewing tensors by the old definition. I have an old maths textbook that talks about the contravariant and covariant components of a vector, but I can't say I like that approach at all (especially as it talks about vector spaces and linear operators in a basis-independent way, but when it comes to tensors it suddenly starts to define them purely in terms of how the components change between bases, even though it notes that (some) linear operators are tensors).
 
  • #69
In my opinion, anybody who thinks that the properties of covariance or contravariance are not intrinsic properties of an object, is quite innocent of what is going on in differential geometry and manifold theory.

Perhaps they are confused by the phenomenon of a Riemannian metric, i.e. a smoothly varying dot product on the tangent bundle, since if one has a dot product, one can artificially represent a vector as a covector, by dotting with it. But then it is a different object. I.e. the operation of dotting with a vector is not the same object as the vector itself.

As to convincing such people:

They said it couldn't be done.
They laughed when I said I would do it.
They said that it couldn't be done.
I rolled up my sleeves and went to it.

I struggled, I strove, I strained.
I fought at it day and night.
They said that it couldn't be done.
They were right.
 
  • #70
jcsd said:
I have an old maths textbook that talks about the contravariant and covariant components of a vector, but I can't say I

jcsd,

i also have many older books that explain tensors this way, and have also been puzzled as to why they would want to approach the subject in this manner. as mathwonk pointed out, the real power of differential geometry will remain a mystery in that context.

it is also interesting to note that differential forms will be well-defined with respect to pullback, even if the inverse mapping isn't. tensors in general (and certainly vectors) do not enjoy this property; they are entirely dependent upon the mapping being bijective. so certain topological mappings (such as irreversible deformations) can still be modeled using forms - this is awesome, and something that i am trying to learn more about now.
 
