Can any matrix be considered a tensor?

In summary, tensors are objects with a physical reality that is independent of the basis representation chosen for them. They transform according to specific rules under a change of coordinates, and not every n-vector or n×n matrix is a tensor. The most general mathematical definition of a tensor is a scalar-valued function, linear in each of its arguments, of q vectors and p dual vectors with respect to a specific vector space. In physics, the term tensor is often used loosely and can refer to objects that are not strictly tensors with respect to a specific vector space.
  • #1
dnquark
From the list of very fundamental things I am confused about:

Let's say I have two bases and a transformation matrix T that allows me to convert between them, like so:

[tex]A'_i = T_{ik} A_k[/tex], where A and A' express the same vector in the two bases.

If I have a second rank tensor, it will transform in a similar way:

[tex]C'_{ij} = T_{ik} T_{jl} C_{kl}[/tex], or in the matrix notation:

[tex]C' = T C T^T[/tex]
On the other hand, if I consider C as a matrix that acts on the vector A, [tex]C A = B[/tex], I can write
[tex]B' = T B = T C A = T C T^{-1} T A = (T C T^{-1}) A'[/tex].

This is all very basic and familiar, but I'm having trouble understanding what exactly it implies about the connection between matrices and tensors. The formulas seem to be saying: "you can always convert matrices between bases using a similarity transformation; however, you cannot call your matrix a tensor unless your bases are connected by an orthogonal transformation, at which point we would have [tex]B' = ( T C T^T) A' = C' A'[/tex]". Is this a correct statement? If so, whether or not something is a tensor is determined not by the intrinsic properties of that object but by the specifics of how one picks the bases and converts between them, which seems a little odd.
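Here is a quick numerical illustration of the tension (a sketch, assuming numpy; the matrix values are made up):

[code]
import numpy as np

# A matrix C in the original basis (made-up values).
C = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Case 1: orthogonal change of basis (a rotation), so T^T = T^{-1}.
t = 0.3
T = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(T @ C @ T.T, T @ C @ np.linalg.inv(T)))  # True: rules agree

# Case 2: non-orthogonal change of basis (a shear), so T^T != T^{-1}.
T = np.array([[1.0, 0.5],
              [0.0, 1.0]])
print(np.allclose(T @ C @ T.T, T @ C @ np.linalg.inv(T)))  # False: rules disagree
[/code]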

Can some linear algebra guru shed some light on what's going on here?
 
  • #2
I don't know the answer, but I think the difference is larger than you suggest: a tensor is NOT a matrix, but it can be represented by a matrix, depending on how you look at it.

For example, the electromagnetic field at a fixed point is a tensor: this tensor can be expressed as a 4x4 matrix, but the elements of this matrix depend on your velocity relative to that fixed point. In other words, the tensor can have multiple matrix representations depending on how you view it. But the underlying EM field tensor does not change.
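As a hedged numerical sketch of that (one common sign convention, natural units with c = 1, numpy assumed, made-up field values): the matrix representation of [itex]F^{\mu\nu}[/itex] changes under a boost via [tex]F' = \Lambda F \Lambda^T ,[/tex] but it is the same underlying tensor.

[code]
import numpy as np

# Contravariant EM field tensor F^{mu nu} in one common convention (c = 1).
E = np.array([0.0, 1.0, 0.0])   # electric field (made-up values)
B = np.array([0.0, 0.0, 1.0])   # magnetic field (made-up values)
F = np.array([[ 0.0,  -E[0], -E[1], -E[2]],
              [ E[0],  0.0,  -B[2],  B[1]],
              [ E[1],  B[2],  0.0,  -B[0]],
              [ E[2], -B[1],  B[0],  0.0]])

# Lorentz boost along x with speed beta.
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

# New matrix representation of the SAME tensor: F'^{mu nu} = L^mu_a L^nu_b F^{ab}.
print(L @ F @ L.T)
[/code]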
 
  • #3
A tensor is a physical object, which has a physical reality which transcends the basis representation you choose. By contrast, a matrix can be any collection of numbers, and the different components of the matrix need not be related at all. Think of a rank 1 tensor, i.e. a vector in 3D space. It has certain physical attributes, such as its length and direction, which do not change regardless of how you look at it or what coordinate system you choose to represent it. In contrast, any old collection of three numbers or three functions can be a (3x1) matrix, and this need not have any physical reality at all.
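A minimal numerical sketch of that invariance (assuming numpy; the vector and angle are made-up values):

[code]
import numpy as np

v = np.array([1.0, 2.0, 2.0])    # components of a physical vector (made-up)
t = 0.7                          # rotation angle about the z axis (made-up)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

v_new = R @ v   # new components, same underlying vector
print(np.linalg.norm(v), np.linalg.norm(v_new))  # the length is unchanged
[/code]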
 
  • #4
A tensor is something that transforms according to very specific rules under a change of coordinates. Something that does not transform according to those rules is not a tensor. It is important to distinguish what a tensor represents from how a tensor is represented. What a tensor represents transcends its representation. The inertia tensor of some object is the same tensor in any set of axes. While the representation of that tensor changes with a change of coordinates, the tensor is still the same tensor.

Not all n-vectors are tensors. To be a tensor that vector has to transform as a rank 1 covariant or contravariant tensor. Similarly, not all n×n matrices are tensors. To be a tensor, a matrix has to transform as a rank 2 tensor (there are three types: (2,0) tensors, (1,1) tensors, and (0,2) tensors).
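One way to make that testable numerically (a sketch, assuming numpy; the vectors are made-up values): build a genuine rank-2 tensor as an outer product of two vectors, then check the component-wise rank-2 rule against the direct construction from the transformed vectors.

[code]
import numpy as np

u = np.array([1.0, 2.0, 0.0])   # made-up vectors
w = np.array([0.0, 1.0, 3.0])
C = np.outer(u, w)              # a genuine rank-2 tensor: C_kl = u_k w_l

t = 0.5
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

C_rule   = np.einsum('ik,jl,kl->ij', R, R, C)  # C'_ij = R_ik R_jl C_kl
C_direct = np.outer(R @ u, R @ w)              # same tensor built in the new basis
print(np.allclose(C_rule, C_direct))           # True: it transforms as a tensor
[/code]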
 
  • #5
My problem is that I've heard all these statements before -- but can anyone point me to a concrete, preferably numerical, example of when something is or is not a tensor?
 
  • #6
I figured it out. The problem was sloppy conversion between index and matrix notation (to this day I haven't seen a good reference that explains the best way to perform it). In any case, if you do this carefully and introduce distinction between upper and lower indices, defining contraction accordingly, it will be clear that what I thought was a transpose is actually a matrix inverse. Thus, "tensor" transformation reduces to the similarity transformation of a matrix.

Another moral of the story is that the condition under which you can ignore distinctions between upper and lower indices is the case of orthonormal bases -- in which case transpose and inverse of transformation are one and the same, so everything is self-consistent in the formulas as written in the first post.
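A sketch of that self-consistency check (assuming numpy; the matrices are made-up values): with a non-orthonormal change of basis, only the inverse, not the transpose, keeps the action of C on A consistent.

[code]
import numpy as np

C = np.array([[2.0, 1.0],
              [0.0, 3.0]])
A = np.array([1.0, 2.0])
B = C @ A                         # B = C A in the old basis

T = np.array([[1.0, 0.5],         # a shear: NOT orthogonal, so T^T != T^{-1}
              [0.0, 1.0]])
A_p, B_p = T @ A, T @ B

C_inv = T @ C @ np.linalg.inv(T)  # similarity / (1,1)-tensor rule
C_tr  = T @ C @ T.T               # the naive "transpose" rule

print(np.allclose(C_inv @ A_p, B_p))  # True: consistent
print(np.allclose(C_tr  @ A_p, B_p))  # False: breaks down
[/code]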
 
  • #7
The most general mathematical definition of a tensor I've found is this: a type-(p,q) tensor with respect to a vector space, V, is a scalar-valued function, linear in each of its arguments, of q vectors (i.e. elements of the underlying set of V) and p dual vectors (i.e. elements of the underlying set of V*, the dual space of V). The dual space is a second vector space over the same field as V whose vectors are the set of all scalar-valued linear functions of vectors in the underlying set of V. So, you could have matrices which are tensors with respect to some vector space, provided you choose an appropriate vector space and an appropriate class of matrices.
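A hedged sketch of that definition in code (numpy assumed, made-up components): a type-(0,2) tensor as a scalar-valued function of two vectors, linear in each argument.

[code]
import numpy as np

M = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # made-up components

def g(u, v):
    # A type-(0,2) tensor: scalar-valued, linear in each vector slot.
    return u @ M @ v

u, v, w = np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 4.0])
a, b = 2.0, -3.0

print(np.isclose(g(a*u + b*w, v), a*g(u, v) + b*g(w, v)))  # linear in slot 1
print(np.isclose(g(u, a*v + b*w), a*g(u, v) + b*g(u, w)))  # linear in slot 2
[/code]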

But when people talk in a physics context about what is or isn't a tensor, they're usually discussing whether an object is or isn't a tensor with respect to some particular (often implicit) vector space or kind of vector space. In relativity, it's the tangent spaces defined at each point of a spacetime manifold. Tensors with respect to a tangent space are called simply "tensors", and contrasted with the various coordinate representations that each of these tensors can be given, or with other tensor-like functions, such as pseudo-tensors, whose values depend on the coordinates in some way and which therefore aren't simply tensors with respect to the tangent spaces.

And, of course, the word tensor is also often used loosely in a physics context as a synonym for tensor field.
 
  • #8
Well the short answer is NO.

Tensors can be represented by matrices, but only by square matrices.

Of course not all matrices are square.

In physics, a physical reality can be given to the entities we call tensors, which in mathematics are defined by:

A tensor is a multilinear form, invariant with respect to a specified group of coordinate transformations in n-space.
This definition encapsulates what others have already said here.

The important point from this definition is the word linear. Tensor algebra is a branch of linear algebra.
The theorems etc. are predicated on linear mathematics and do not hold in the non-linear arena.

I don't know in what context you are studying tensors, but the Schaum series book by Mase: Continuum Mechanics has an accessible introduction to tensors and related objects and their usefulness.
 
  • #9
Example: the gamma matrices in the Dirac equation. The four matrices together, labeled by an index mu ranging from 0 to 3, transform as a four-vector. But the individual four-by-four gamma matrices do not transform; they are therefore not tensors.
 
  • #10
Inasmuch as every square matrix represents a linear transformation from a space to itself, every matrix represents a (1,1) tensor, since linear transformations are naturally isomorphic to (1,1) tensors.
 
  • #11
dnquark said:
From the list of very fundamental things I am confused about:

Let's say I have two bases and a transformation matrix T that allows me to convert between them, like so:

[tex]A'_i = T_{ik} A_k[/tex], where A and A' express the same vector in the two bases.

It is customary to notate vectors with upper indices and dual vectors with lower indices. In any case, written as a (1,1) tensor acting on a dual vector to produce another dual vector:

[tex]A'_i = T_{i}{^k} A_k \ .[/tex]

Now, a square matrix acts from the left on a column vector and spits out another column vector. We take column vectors to be vectors and row vectors to be dual vectors. The tensor equivalent of this matrix operation is:

[tex]B^i = T^{i}{_k} A^k \ .[/tex]
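In code, that contraction is just the usual left matrix-vector product (a numpy sketch with made-up values):

[code]
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = np.array([5.0, 6.0])

B_index  = np.einsum('ik,k->i', T, A)   # B^i = T^i_k A^k
B_matrix = T @ A                        # column vector acted on from the left
print(np.allclose(B_index, B_matrix))   # True
[/code]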

Edit: I see eok20 has already pointed this out, though it helps to see it written out.

Rasalhague said:
The most general mathematical definition of a tensor I've found is this: a type-(p,q) tensor with respect to a vector space, V, is a scalar-valued function, linear in each of its arguments, of q vectors (i.e. elements of the underlying set of V) and p dual vectors (i.e. elements of the underlying set of V*, the dual space of V). The dual space is a second vector space over the same field as V whose vectors are the set of all scalar-valued linear functions of vectors in the underlying set of V. So, you could have matrices which are tensors with respect to some vector space, provided you choose an appropriate vector space and an appropriate class of matrices.

But this doesn't seem to fully define a tensor... I hadn't really thought about it before. A tensor is a vector in its own right, and obeys the axioms of a vector space over a field.
 
  • #12
Studiot said:
Well the short answer is NO.

Tensors can be represented by matrices, but only by square matrices.

Of course not all matrices are square.

In physics, a physical reality can be given to the entities we call tensors, which in mathematics are defined by:

A tensor is a multilinear form, invariant with respect to a specified group of coordinate transformations in n-space.
This definition encapsulates what others have already said here.

The important point from this definition is the word linear. Tensor algebra is a branch of linear algebra.
The theorems etc. are predicated on linear mathematics and do not hold in the non-linear arena.

I don't know in what context you are studying tensors, but the Schaum series book by Mase: Continuum Mechanics has an accessible introduction to tensors and related objects and their usefulness.

Yes, I agree with you. The concept of a tensor is broader than that of a matrix. And in tensor calculations, matrix notation is often more convenient than working with individual components.
 
  • #13
Then there are rectangular tensors whose indices do not range over the same values.
 
  • #14
Then there are rectangular tensors whose indices do not range over the same values.

It would be instructive to display some examples, since the properties of square matrices are just those we require for tensors, viz. the existence of a determinant, an inverse, and so on.
 
  • #15
Phrak said:
But this doesn't seem to fully define a tensor... I hadn't really thought about it before. A tensor is a vector in its own right, and obeys the axioms of a vector space over a field.

Thanks for pointing that out, Phrak. Does this modified version work?

A type-(p,q) tensor with respect to an n-dimensional vector space, V, is a scalar-valued function, linear in each of its arguments, of q vectors (i.e. elements of the underlying set of V) and p dual vectors (i.e. elements of the underlying set of V*, the dual space of V), such that this function belongs to the underlying set of an [itex]n^{p+q}[/itex]-dimensional vector space, T, over the same field as V.

At first, I wrote in an exception for constant functions, defined as type-(0,0) tensors, i.e. scalars, but then it occurred to me that these can be considered 1-dimensional vectors if we take field addition to serve as vector addition as well.
 
  • #16
Rasalhague

You can check for yourself that a type-(p,q) tensor is an element of a vector space against the set of axioms of a vector space: http://mathworld.wolfram.com/VectorSpace.html

But what are the bases of the vector space? It's useful to consider an example. Take the tensor A with row elements [A^1, A^2] and [A^3, A^4] on the 2-dimensional manifold R^2.

We can define the basis for the tensor in terms of the basis imposed on the manifold:

[tex]\hat{\theta}_1 = \hat{e}_1 \otimes \hat{e}_1[/tex]
[tex]\hat{\theta}_2 = \hat{e}_1 \otimes \hat{e}_2[/tex]
[tex]\hat{\theta}_3 = \hat{e}_2 \otimes \hat{e}_1[/tex]
[tex]\hat{\theta}_4 = \hat{e}_2 \otimes \hat{e}_2[/tex]

In this example A is a four-dimensional vector in the [itex]\hat{\theta}[/itex] basis.

[tex]A = A^{i}\hat{\theta}_i[/tex]
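A numpy sketch of that expansion (the component values are made up): the four [itex]\hat{\theta}_i[/itex] are the outer products of the e's, and the matrix A is literally a linear combination of them.

[code]
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# theta_i = e_a (x) e_b, realized as 2x2 matrices:
thetas = [np.outer(e1, e1), np.outer(e1, e2),
          np.outer(e2, e1), np.outer(e2, e2)]

A_components = [1.0, 2.0, 3.0, 4.0]   # A^1 .. A^4 (made-up values)
A = sum(c * th for c, th in zip(A_components, thetas))
print(A)   # [[1. 2.]
           #  [3. 4.]] -- the matrix IS a vector in this 4-dimensional space
[/code]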

Bring in the mathematicians who can, no doubt, criticize the sloppy language, but this is the general idea.

There's an interesting element to all this. If A is a type (1,0) tensor in R4, how is the dual basis represented in the original R2 space?
 
  • #17
I suppose it makes more sense to express it the other way around, since "linear" doesn't mean anything unless we have some way to add tensors and multiply them by scalars:

A type-(p,q) tensor with respect to an n-dimensional vector space, V, is an element of the underlying set of an [itex]n^{p+q}[/itex]-dimensional vector space, T, over the same field as V, such that the tensor is a scalar-valued function, linear in each of its arguments, of q vectors (i.e. elements of the underlying set of V) and p dual vectors (i.e. elements of the underlying set of V*, the dual space of V).

Or if it's preferable not to define them in advance as vectors, but to let that fact emerge:

A type-(p,q) tensor with respect to an n-dimensional vector space, V, is a scalar-valued function of q vectors (i.e. elements of the underlying set of V) and p dual vectors (i.e. elements of the underlying set of V*, the dual space of V), on which we define binary operations

+(A,B) = C, where A, B and C are all type-(p,q) tensors;
*(s,X) = Y, where X, Y are type-(p,q) tensors, and s a scalar with respect to V,

such that the tensor is linear in each of its arguments.
 
  • #18
Phrak said:
There's an interesting element to all this. If A is a type (1,0) tensor in R4, how is the dual basis represented in the original R2 space?

I don't understand this part. Maybe I'm missing something obvious... If A is a type-(1,0) tensor in R4, and we select a basis for R4, then at most two dual vectors of the dual basis can be represented in R2. With respect to R4, maybe R2 would be where type-(-2,0) tensors live, if there is such a thing, but I haven't seen any definition of tensors where p or q are negative, and I don't know what it would mean. (I should have specified in my definition that p and q are nonnegative integers.)
 
  • #19
Your descriptive language is better than mine.

What I was asking was how to go from the dual space in theta back to the dual of the original space with basis e.

So, starting again with this

[tex]\hat{\theta}_1 = \hat{e}_1 \otimes \hat{e}_1[/tex]
[tex]\hat{\theta}_2 = \hat{e}_1 \otimes \hat{e}_2[/tex]
[tex]\hat{\theta}_3 = \hat{e}_2 \otimes \hat{e}_1[/tex]
[tex]\hat{\theta}_4 = \hat{e}_2 \otimes \hat{e}_2[/tex]

then the dual vector space in theta has basis [itex]\hat{\omega}^i[/itex], where

[tex]\hat{\omega}^i \hat{\theta}_j = \delta^i_j[/tex]

There is also a dual space to the vector space in e, with basis [itex]\hat{a}^\mu[/itex], where

[tex]\hat{a}^\mu \hat{e}_\nu = \delta^\mu_\nu[/tex]

I don't mean to imply I have looked at it. I was suddenly curious as to the relationship between a and omega. Of course, the original tensor could be of mixed type, so it could get more involved.
 
  • #20
I don't know if this counts as an answer or is just a restatement of the problem, but I think it would depend on which of the 24 permutations of the numbers 1, 2, 3 and 4 we arbitrarily choose when labelling the thetas in the equations that define them. I can't see how there could be a simple, general way of relating the a's and the omegas that doesn't require us to already know how the thetas are defined in terms of the original basis, unless there were some universal convention for labelling them. But if we know the thetas in terms of the e's, we can get the a's simply and directly from the e's, as the sketch below makes concrete.
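A hedged numpy sketch of that last step (the basis matrix E is a made-up example): if the e's are the columns of a matrix E, the dual covectors [itex]\hat{a}^\mu[/itex] are the rows of [itex]E^{-1}[/itex], and taking Kronecker products in the same ordering as the thetas reproduces the duality relation for the omegas.

[code]
import numpy as np

# A (possibly non-orthonormal) basis e_1, e_2 as the columns of E (made-up).
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])
e = [E[:, 0], E[:, 1]]

# Dual basis a^mu: rows of E^{-1}, so that a^mu . e_nu = delta^mu_nu.
a = list(np.linalg.inv(E))
print([[float(np.dot(am, en)) for en in e] for am in a])  # identity

# theta_j = e_a (x) e_b and omega^i = a^c (x) a^d, flattened via kron:
thetas = [np.kron(x, y) for x in e for y in e]
omegas = [np.kron(x, y) for x in a for y in a]
print(np.allclose([[w @ th for th in thetas] for w in omegas],
                  np.eye(4)))   # True: omega^i(theta_j) = delta^i_j
[/code]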
 

FAQ: Can any matrix be considered a tensor?

Can any matrix be considered a tensor?

No, not every matrix can be considered a tensor. A tensor is a mathematical object that represents a multilinear relationship between vectors and dual vectors and that transforms by a definite rule under a change of basis. A square matrix can serve as the component representation of a rank-2 tensor, but only if its entries transform by that rule.

What is the difference between a matrix and a tensor?

A matrix is a two-dimensional array of numbers, while a tensor can have any number of indices (its rank). A matrix can represent a linear transformation between two vector spaces; a tensor more generally represents a multilinear, basis-independent relationship among vectors and dual vectors.

Are there different types of tensors?

Yes, there are different types of tensors, such as scalars, vectors, and higher-order tensors. Scalars are rank-0 tensors, vectors are rank-1 tensors, and rank-2 tensors can be represented by square matrices. Higher-order tensors have a rank greater than 2 and represent more complex multilinear relationships.

Can a matrix be converted into a tensor?

A matrix can be interpreted as a tensor once a vector space and a basis are fixed and its components obey the required transformation law, for example as the representation of a (1,1) tensor (a linear map) or a (0,2) tensor (a bilinear form). A matrix whose entries do not transform appropriately has no meaningful interpretation as a tensor.

How are tensors used in science and engineering?

Tensors are used in various fields of science and engineering, such as physics, engineering, and computer science. They are used to represent and analyze complex relationships between sets of data, and are especially useful in fields that deal with multidimensional data. Tensors are also used in machine learning and artificial intelligence to process and analyze large datasets.
