Why is a Matrix a Tensor of Type (1,1)?

In summary, a matrix is a tensor of type (1,1) when it is used as a linear operator: it carries one index of each type, so it consumes one column vector and one row vector to produce a scalar (equivalently, it maps column vectors to column vectors). Order-2 tensors of type (2,0) or (0,2), such as a metric, can also be written as square arrays, but they transform differently under a change of basis, so that notation is a convenient abuse rather than evidence that every square array is a (1,1) tensor. Row vectors and column vectors are the two order-1 types, and contracting indices of opposite type is what produces invariant scalars.
  • #1
mmmboh
I decided to take out a book and read about tensors on my own, but I am having a bit of trouble, mainly with regard to the indexing (although I understand covariance and contravariance, at least superficially). Why is a matrix a tensor of type (1,1)? Obviously it is of order 2, but why not (2,0) or (0,2)? Does it have to do with the fact that a matrix consists of rows and columns, and a (1,0) tensor is a row vector while a (0,1) tensor is a column vector? I understand that under coordinate transformations a (1,1) tensor transforms a certain way, but I don't see why a matrix must necessarily transform as a (1,1) tensor.

Can someone help me out please?
 
  • #2
Note that we often write the components of rank (2,0) and (0,2) tensors as matrices (as, for example, when we write a metric), but this is a corruption of the proper notation. When we do so we are "kludging" a bit by invoking the transpose, which is a highly basis-dependent entity.

The best way to see the rank of a tensor is to contract it with enough rank-(0,1) and rank-(1,0) vectors to get down to a scalar. Under contraction, ranks of opposite type must cancel.

To get a scalar from a matrix you must multiply on the left by a row vector and on the right by a column vector.

If you have an object which maps three row vectors and two column vectors to a scalar, then it is a rank-(2,3) tensor: it cancels two rank-(0,1) tensors and three rank-(1,0) tensors, whose product is a rank-(3,2) tensor, and the (2,3) cancels the (3,2) to yield a scalar.
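
To make that counting concrete, here is a minimal numpy sketch (my own illustration, not part of the original post): build a five-index array and contract it against three vectors of one type and two of the other, and a single scalar comes out.

[code]
import numpy as np

rng = np.random.default_rng(0)

# A five-index object: three slots for row vectors, two for column vectors.
T = rng.standard_normal((2, 2, 2, 2, 2))
r1, r2, r3 = rng.standard_normal((3, 2))   # three row vectors
c1, c2 = rng.standard_normal((2, 2))       # two column vectors

# Contract every slot with a vector of the opposite type: no free
# indices remain, so the result is a scalar.
s = np.einsum('abcde,a,b,c,d,e->', T, r1, r2, r3, c1, c2)
print(s)
[/code]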

The more proper statement is that matrices, when used as linear operators, are rank (1,1).

This is easiest to see, and most natural, when you use index notation (repeated indices within a term are summed over, via Einstein's convention).
[itex] MA = B[/itex] with M a matrix operator and A and B column vectors:
In component form:
[itex]M^i_j A^j = B^i[/itex]
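
As a quick numerical check of that component equation (again my own sketch, not from the post), numpy's einsum makes the index bookkeeping explicit: contracting the lower index of [itex]M[/itex] with a column vector leaves one free index (a column vector), and contracting the remaining index with a row vector leaves none (a scalar).

[code]
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # the matrix operator M^i_j
A = np.array([5.0, 6.0])            # column vector A^j
w = np.array([7.0, 8.0])            # row vector w_i

# M^i_j A^j = B^i : one index contracted, one left free.
B = np.einsum('ij,j->i', M, A)      # same as M @ A

# w_i M^i_j A^j : both indices contracted, a scalar.
s = np.einsum('i,ij,j->', w, M, A)  # same as w @ M @ A
print(B, s)
[/code]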

If we really wanted to write a metric (on the space of column vectors) properly, we should write it as a row vector of row vectors (a rank (2,0) tensor). For example, in two dimensions:

[itex]g = ((2,0),(0,3))[/itex]

A metric applied to one vector gives its "geometric transpose" or "orthogonal dual" (I'm sure there's a more proper name for this, but it escapes me at the moment):
[itex] g(X) = g\left(\begin{array}{c}x_1\\ x_2\end{array}\right)=( (2,0)\, , \, (0,3) )\left(\begin{array}{c}x_1\\ x_2\end{array}\right)= (2,0)x_1 + (0,3)x_2[/itex]
[itex] = (2 x_1,3 x_2)[/itex]
g has mapped a column vector to a row vector. Applying to two column vectors gives their dot product under that metric:
[itex] g(X,Y) = (gX)Y = (2x_1, 3 x_2)\left(\begin{array}{c}y_1\\ y_2\end{array}\right)=2x_1 y_1 + 3 x_2 y_2[/itex]
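
Here is the same calculation in numpy (a sketch of my own, following the numbers above): the metric diag(2, 3), applied once, turns a column vector into a row vector; applied to two column vectors, it returns their dot product under that metric.

[code]
import numpy as np

g = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # the metric ((2,0),(0,3)) stored as a square array
X = np.array([1.0, 2.0])     # column vector (x1, x2)
Y = np.array([4.0, 5.0])     # column vector (y1, y2)

gX = g @ X                   # the "geometric transpose": row vector (2*x1, 3*x2)
print(gX)                    # [2. 6.]

s = gX @ Y                   # g(X, Y) = 2*x1*y1 + 3*x2*y2
print(s)                     # 2*1*4 + 3*2*5 = 38.0
[/code]

Note that if you set g = np.eye(2), the entries of gX coincide with those of X, which is exactly the sense in which the transpose matches the "orthogonal dual" only in an orthonormal basis.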

Similarly a dual metric (rank (0,2)) would be written as a column vector of column vectors. I'll skip typesetting as it takes up too much space.

Typically we try to work in an ortho-normal basis; as long as we're working in a Euclidean space, the metric in matrix form will then look like the identity matrix. In an ortho-normal basis, the matrix transpose will correspond to the "orthogonal dual".
 
  • #3
OK, so I get the motivation behind it, but are you saying that you can't actually represent a (2,0) or (0,2) tensor as a matrix?

Also, I still don't quite understand what the contraction argument has to do with covariance and contravariance. Why does a matrix transform like a (1,1) tensor, i.e. [tex]A_j^{*i}=A_n^m\frac{\partial z^i}{\partial x^m}\frac{\partial x^n}{\partial z^j} [/tex] and why does a row vector necessarily transform in a contravariant way, and a column vector in a covariant way? This covariant and contravariant indexing is my biggest confusion with tensors so far; if I could get past this I think I could move on quickly.
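
(A numerical way to see why the type matters, added here as my own sketch rather than part of the thread: take a linear change of coordinates [itex]z = Sx[/itex], so that [itex]\partial z^i/\partial x^m[/itex] is the matrix [itex]S[/itex] and [itex]\partial x^n/\partial z^j[/itex] is [itex]S^{-1}[/itex]. The (1,1) law above then reads [itex]M^* = S M S^{-1}[/itex], and it is the only law that keeps [itex]MA = B[/itex] true in every coordinate system.)

[code]
import numpy as np

S = np.array([[2.0, 1.0],
              [0.0, 1.0]])       # change of coordinates z = S x (invertible)
S_inv = np.linalg.inv(S)

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])       # operator components in x coordinates
A = np.array([5.0, 6.0])         # column vector in x coordinates
B = M @ A                        # M A = B holds in x coordinates

# (1,1) transformation law: M* = S M S^{-1}; column vectors pick up one S.
M_star = S @ M @ S_inv
A_star, B_star = S @ A, S @ B
print(np.allclose(M_star @ A_star, B_star))   # True: M* A* = B* automatically

# Transforming both indices the same way, as an order-2 (co)variant
# tensor would, breaks the operator equation unless S is orthogonal.
M_wrong = S @ M @ S.T
print(np.allclose(M_wrong @ A_star, B_star))  # False for this S
[/code]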
 

FAQ: Why is a Matrix a Tensor of Type (1,1)?

What is a Matrix?

A matrix is a rectangular array of numbers or symbols arranged in rows and columns. It is commonly used to represent linear maps, systems of linear equations, and tabular data.

What is a Tensor?

A tensor is a multilinear object whose components transform in a prescribed way under a change of basis; equivalently, it is a multilinear map taking some number of vectors and dual vectors to a scalar. Tensors of arbitrary order are used throughout physics and engineering.

Why is a Matrix a Tensor of Type (1,1)?

The type (p,q) of a tensor does not count rows and columns; it counts how many arguments of each kind the tensor takes (equivalently, how many contravariant and covariant indices it has). A matrix used as a linear transformation takes one vector in and returns one vector out, i.e. it carries one index of each kind, so it is a tensor of type (1,1). The same square array of numbers could also store the components of a (2,0) or (0,2) tensor, such as a metric, but those objects transform differently under a change of basis.
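
For concreteness (this worked example goes beyond the original FAQ text): under a change of coordinates from [itex]x[/itex] to [itex]z[/itex], a linear operator transforms with one Jacobian factor of each kind, while a doubly covariant tensor such as a metric takes two factors of the same kind:

[tex]M^{*i}_{\ \ j}=\frac{\partial z^i}{\partial x^m}\frac{\partial x^n}{\partial z^j}M^m_{\ \ n}, \qquad g^*_{ij}=\frac{\partial x^m}{\partial z^i}\frac{\partial x^n}{\partial z^j}g_{mn}[/tex]

Both objects are stored as square arrays, but only the first law preserves the operator equation [itex]MA = B[/itex] in every coordinate system, and that is what singles out type (1,1) for matrices acting as linear maps.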

What is the difference between a Matrix and a Tensor?

A matrix is a two-dimensional array of numbers, while a tensor is a geometric object of any order defined by how its components transform under a change of basis. An order-2 tensor can be represented by a matrix once a basis is chosen, but the array alone does not determine the tensor's type: (1,1), (2,0), and (0,2) tensors are all written as square arrays yet transform differently.

How is a Matrix used in real-world applications?

Matrices have a wide range of applications in various fields such as engineering, computer science, economics, and physics. They are used to represent data sets, solve linear equations, and perform operations such as transformation and rotation. Matrices are also used in machine learning algorithms for data analysis and prediction.
