Rasalhague
I've seen a Riemannian metric tensor defined in terms of a matrix of its components thus:
[tex]g_{ij} = \left [ J^T J \right ]_{ij}[/tex]
and a pseudo-Riemannian metric tensor:
[tex]g_{\alpha \beta} = \left [ J^T \eta J \right ]_{\alpha \beta}[/tex]
where J is a transformation matrix of some kind. I've been trying to figure out which transformation matrix is meant here, as the symbol J and the term Jacobian matrix are used differently by different textbooks. Some use it for a matrix that transforms basis vectors, others for the inverse of this, which transforms components/coordinates. As I recall, in Quick Introduction to Tensor Analysis, Sharipov uses the terms direct transition matrix S and indirect transition matrix T = S⁻¹. But as these names are rather opaque, I'll call them the basis transformation matrix B and the component (coordinate) transformation matrix C = B⁻¹.
I think that in the above definitions, J = B (rather than C), giving us, for the component matrix of a Riemannian metric tensor:
[tex]g_{ij} = \left [ B^T B \right ]_{ij}.[/tex]
The next question is, which of the following matrices is meant: B₁ or B₂?
[tex]B_1 \begin{bmatrix} ... \\ \vec{\textbf{e}}_i \\ ...
\end{bmatrix} = \begin{bmatrix} ... \\ \vec{\textbf{e}}_i \\ ...
\end{bmatrix}' \qquad \textup{or} \qquad \begin{bmatrix} ... & \vec{\textbf{e}}_i & ... \end{bmatrix} B_2 = \begin{bmatrix} ... & \vec{\textbf{e}}_i & ... \end{bmatrix}'[/tex]
I experimented with the transformation from Cartesian to plane polar coordinates, and concluded that B = B₁ and Bᵀ = B₂, since
[tex]\begin{bmatrix} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta
\end{bmatrix} \begin{bmatrix} \vec{\textbf{e}_x} \\ \vec{\textbf{e}_y} \end{bmatrix} = \begin{bmatrix} \vec{\textbf{e}}_r \\ \vec{\textbf{e}}_{\theta} \end{bmatrix}[/tex]
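(For reference, the rows of this matrix are just the chain rule applied to x = r cos θ, y = r sin θ:
[tex]\vec{\textbf{e}}_r = \frac{\partial x}{\partial r} \vec{\textbf{e}}_x + \frac{\partial y}{\partial r} \vec{\textbf{e}}_y = \cos \theta \, \vec{\textbf{e}}_x + \sin \theta \, \vec{\textbf{e}}_y, \qquad \vec{\textbf{e}}_{\theta} = \frac{\partial x}{\partial \theta} \vec{\textbf{e}}_x + \frac{\partial y}{\partial \theta} \vec{\textbf{e}}_y = -r \sin \theta \, \vec{\textbf{e}}_x + r \cos \theta \, \vec{\textbf{e}}_y.)[/tex]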
and if we call this matrix B, then
[tex]B^T B = \begin{bmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta
\end{bmatrix} \begin{bmatrix} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & r^2 \end{bmatrix} = g[/tex]
and then
[tex]\textup{d}s^2 = g_{ij} \, \textup{d}y^i \textup{d}y^j = \textup{d}r^2 + r^2 \, \textup{d}\theta^2[/tex]
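Written out in full, since this g is diagonal, only the i = j terms survive:
[tex]\textup{d}s^2 = g_{11} \, \textup{d}r^2 + 2 g_{12} \, \textup{d}r \, \textup{d}\theta + g_{22} \, \textup{d}\theta^2 = 1 \cdot \textup{d}r^2 + 0 + r^2 \, \textup{d}\theta^2.[/tex]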
which is the formula given for a line element in plane polar coordinates. I'm using yⁱ to represent the coordinates of the current (new) system, and xⁱ for the coordinates of the previous (old) system. So if I've got this right,
[tex]B = \begin{bmatrix} \frac{\partial x^1}{\partial y^1} & \frac{\partial x^2}{\partial y^1} \\ \frac{\partial x^1}{\partial y^2} & \frac{\partial x^2}{\partial y^2} \end{bmatrix} \qquad \textup{and} \qquad B^T = \begin{bmatrix} \frac{\partial x^1}{\partial y^1} & \frac{\partial x^1}{\partial y^2} \\ \frac{\partial x^2}{\partial y^1} & \frac{\partial x^2}{\partial y^2} \end{bmatrix}.[/tex]
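As a check, evaluating these entries for the polar example (with x¹ = x = r cos θ, x² = y = r sin θ, y¹ = r, y² = θ) reproduces the matrix above:
[tex]B = \begin{bmatrix} \frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} \\ \frac{\partial x}{\partial \theta} & \frac{\partial y}{\partial \theta} \end{bmatrix} = \begin{bmatrix} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta \end{bmatrix}.[/tex]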
With any luck, that gives:
[tex]\left [ B^T B \right ]_{ij} = B_{ki} B_{kj} = g_{ij}[/tex]
summing over the indices on the y's.
But when I write this with index notation, I get
[tex]\frac{\partial x^i}{\partial y^k} \frac{\partial x^j}{\partial y^k} = g_{ij}[/tex]
which breaks the rules for where to write indices, since the summed-over indices are on the same level as each other, and the free indices on the left of the equality are on a different level from the free indices on the right. Have I made a mistake somewhere?
And how can I reconcile this with equation (10) of this article at Wolfram Mathworld, defining the pseudo-Riemannian metric tensor?
[tex]g_{\mu\nu} = \frac{\partial \xi^{\alpha}}{\partial x^{\mu}} \frac{\partial \xi^{\beta}}{\partial x^{\nu}} \eta_{\alpha\beta}[/tex]
http://mathworld.wolfram.com/MetricTensor.html
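For comparison, in the Riemannian case, with η replaced by the identity and ξ playing the role of the old coordinates, their formula would read:
[tex]g_{\mu\nu} = \frac{\partial \xi^{\alpha}}{\partial x^{\mu}} \frac{\partial \xi^{\beta}}{\partial x^{\nu}} \delta_{\alpha\beta} = \sum_{\alpha} \frac{\partial \xi^{\alpha}}{\partial x^{\mu}} \frac{\partial \xi^{\alpha}}{\partial x^{\nu}}.[/tex]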
Supposing they're using x's, as in their eq. (1), to denote the coordinates of the current system, they seem to have the partial derivatives the opposite way round to me. How would the equivalent definition for the components of a Riemannian metric tensor be written correctly? Is it possible (desirable? conventional?) to define the metric tensor's component matrix in terms of the component transformation matrix C?
I wondered if I might have inadvertently defined the inverse metric tensor, but that doesn't seem to work, since the inverse matrix of what I defined as g doesn't correctly define the line element:
[tex]\sum_{i=1}^{2} \sum_{j=1}^{2} g^{ij} \, \textup{d}y^i \textup{d}y^j = \textup{d}r^2 + r^{-2} \, \textup{d}\theta^2.[/tex]