Rising Eagle
I have been puzzling over the best point of view for comprehending the true algebraic nature of tensors for years now.
With vector spaces, I similarly puzzled and concluded that vector spaces are basically sets of abstract members closed under linear combinations (i.e., any linear space). I realized that, given some selection of basis vectors and coordinate system type (e.g., Cartesian, cylindrical, etc.), mappings are used to take the abstract members to useful and manipulable objects such as ordered n-tuples, column matrices, polynomials, or any other object whose component coefficients we can manipulate. I see the components not as the vectors themselves, but as just scale factors relative to the basis vectors, which are themselves abstract unknowns. I concluded that the vector nature of the abstract members is not understood through their individual internal workings (which are usually inaccessible anyway), but through their relationship to all other members of the space. Of course one may assign a very manipulable type of object as the interpretation of the members (e.g., polynomials, n×n matrices for some n, displacement vectors), but the properties of the individual members of the space are not vector properties; the vector properties lie in, and are the sole domain of, the space as a whole.
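To make the "components are just scale factors relative to a chosen basis" idea concrete, here is a minimal sketch in NumPy. The vector v and the basis B are hypothetical examples of my own choosing; the point is only that the same vector has different components in different bases, while the reconstructed object is unchanged.

```python
import numpy as np

# Components of some vector v relative to the standard basis of R^2
# (a hypothetical example):
v_std = np.array([3.0, 1.0])

# A different basis B, with its basis vectors as columns:
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# Components of the *same* vector relative to B solve B @ v_B = v_std:
v_B = np.linalg.solve(B, v_std)

# Reconstructing from either component list gives the same vector;
# the components are just scale factors on the chosen basis vectors.
assert np.allclose(B @ v_B, v_std)
```

Here `v_B` comes out as `[2., 1.]`: different numbers than `[3., 1.]`, yet 2·(1,1) + 1·(1,−1) is the same vector as 3·(1,0) + 1·(0,1).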
For tensors, my struggle is ongoing and I seek able guidance. I thought to define a tensor as a member of a linear space. Simple and complete. For this to work it must be proven that any member of any definable linear space is a tensor and vice versa. Is there such a proof, or perhaps a counterexample showing my point of view to be in error?
I have also thought that it may be more correct to define a tensor as any member of any definable inner product space. Can it be proven that any member of any definable inner product space is a tensor and vice versa?
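One direction of this question can at least be illustrated: a space of order-2 tensors does carry a natural inner product. Below is a sketch using 2×2 matrices (hypothetical entries of my choosing) with the Frobenius inner product ⟨A, B⟩ = trace(AᵀB), which reduces to the familiar component-by-component sum.

```python
import numpy as np

# Two hypothetical order-2 tensors over R^2, written as matrices:
A = np.array([[1.0, 0.0],
              [2.0, 1.0]])
B = np.array([[0.0, 1.0],
              [1.0, 3.0]])

# Frobenius inner product on the space of 2x2 matrices:
ip = np.trace(A.T @ B)

# It agrees with the elementwise-sum form, so the matrix space is an
# inner product space whose members are order-2 tensors:
assert np.isclose(ip, np.sum(A * B))
```

This shows tensor spaces can be given inner products; whether *every* inner product space is naturally a tensor space is the open end of my question.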
This is the first step in a series of steps I wish to take to develop a cohesive understanding of tensors as an algebraic mathematical space.
My next step is to show that covariance and contravariance are simply different representations mapped from the same tensor space of some order, such that the dual space and primal space (or any mixture of the two for order > 1) are just different faces of the same inner product space. Here I would like to show that there truly is no distinction between the set of linear functionals on a vector space and the vector space itself. Functionals unfortunately inspire a picture of a math object that has an operand input and an output, whereas the vector space plays the role of a passive operand. However, any functional from the dual space, when paired with a vector, maps each vector in the primal space to a member of its underlying scalar field, and likewise, any vector from the primal space, when paired with a functional, maps each functional from the dual space to a member of its underlying scalar field (which is, by definition, the same scalar field in both spaces). This tells me that the two spaces truly are on equal footing (like the bra and ket of quantum mechanics). And if tensors are of necessity members of inner product spaces, I believe the connection between covariant and contravariant representations should be easier to prove, and their presence would be more application-related than a fundamental property of the tensor space.
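The symmetry of the pairing, and the role of the inner product in identifying the primal and dual spaces, can be sketched in components. The covector f, vector v, and the Euclidean metric g below are all hypothetical choices for illustration.

```python
import numpy as np

# A covector (linear functional) on R^3, and a vector, by components
# (hypothetical examples):
f = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.0, 1.0])

# The pairing is symmetric in role: "f applied to v" and "v applied to f"
# are the same scalar in the underlying field.
pairing = f @ v

# With an inner product (metric g), each vector determines a covector
# by "lowering an index" -- the identification of the two spaces:
g = np.eye(3)          # Euclidean metric, assumed for simplicity
v_lowered = g @ v      # covariant components of the same object v
assert np.isclose(v_lowered @ v, v @ g @ v)
```

With the Euclidean metric the covariant and contravariant components coincide numerically, which is exactly why the distinction is invisible in elementary Cartesian settings and only surfaces with a nontrivial metric or basis.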
My next step is to show that there is a set of tensor spaces, each of a different class (class means the order of the tensors in the space, where the dimensional cardinality of each ordering is specified), within which the inner and outer (tensor) products and their various combinations are closed. That is, each is an operation where a tensor from one class is paired with a tensor from the same or another class to yield a tensor from either class or a third class. Of course there are compatibility requirements that depend on any occurrences of inner product contractions when these operations are applied.
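The closure behavior described above can be sketched with `numpy.einsum`, working purely with components of hypothetical tensors over R³: the outer product adds orders, while a contraction consumes one index from each factor, and the only compatibility requirement is that the contracted dimensions match.

```python
import numpy as np

# Hypothetical component arrays: a is an order-1 tensor, b is order-2.
a = np.arange(3.0)                 # class (order) 1 over R^3
b = np.arange(9.0).reshape(3, 3)   # class (order) 2 over R^3

# Outer (tensor) product: orders add, 1 + 2 -> 3.
t = np.einsum('i,jk->ijk', a, b)
assert t.shape == (3, 3, 3)

# Product with one contraction (an inner-product-like pairing of an
# index from each factor): 1 + 2 - 2 -> 1.
c = np.einsum('i,ik->k', a, b)
assert c.shape == (3,)
```

The class of the result is read off from the index pattern: free indices survive, repeated indices are contracted away, which is the closure rule over the whole family of tensor classes.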
My final step is to understand how tensor bases (in analogy with those of vector spaces) and class parameters may be specified such that I can map an abstract tensor space of some given class, with its full glory of tensor properties, to manipulable objects amenable to analysis.
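For this last step, the analogy with vector bases is direct: a basis for an order-2 tensor space is induced by the vector-space basis as the products e_i ⊗ e_j, and the tensor's components are its coefficients in that induced basis. A minimal sketch, using the standard basis of R² and a hypothetical component matrix T:

```python
import numpy as np

# Standard basis of R^2 (rows of the identity), for simplicity:
e = np.eye(2)

# Hypothetical components of an order-2 tensor T in that basis:
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# The induced tensor basis e_i (x) e_j, realized as outer products:
basis = [np.outer(e[i], e[j]) for i in range(2) for j in range(2)]

# T is the linear combination of the basis tensors weighted by its
# components, exactly as a vector is basis vectors times coordinates:
recon = sum(T[i, j] * np.outer(e[i], e[j])
            for i in range(2) for j in range(2))
assert np.allclose(recon, T)
```

The "class parameters" then amount to choosing how many primal and dual basis factors appear in each product, which fixes the variance type of each slot.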
If there are other steps I must take to complete my cohesive understanding, I have not ascertained them yet. I wish to eliminate gaps in my understanding of how basic algebraic concepts develop into fully fledged mathematical objects used in differential geometry and related theories in physics.
Any enlightenment relevant to any or all of the above paragraphs on tensors (and vectors too) is welcome.