Strange index notation for linear transformation matrix

In summary, this thread asks why the matrix of a basis transformation is written with staggered indices as ##A^i_{\ \ j}##, and why the corresponding dual basis transformation matrix is written ##(\tilde A^{-1})_i^{\ \ \ j}## with the indices staggered the other way. The replies explain that the vertical placement of indices tracks summation and tensor type, while the horizontal (left/right) placement is largely a convention, chosen so that component expressions stay consistent with raising and lowering indices by the metric.
  • #1
sphyrch
I'm reading Liang's book on General Relativity and Differential Geometry, and came across this part:

If there is a basis transformation ##e'_j=A^i_{\ \ j}e_i## in a vector space ##V## and the (non-degenerate) matrix constituted by elements ##A^i_{\ \ j}## is denoted by ##A##, then the corresponding dual basis transformation is $$\epsilon'^j=(\tilde A^{-1})_i^{\ \ \ j}\epsilon^i$$ where ##\tilde A=A^T## (transpose)

Remark: Here we write the matrix elements as ##A^i_{\ \ j}##. The reason for distinguishing the upper and lower indices is to indicate which indices are summed and to identify the type of a tensor. However, all that matters in the matrix operation is distinguishing the left and right indices. Therefore, if you want, you may change all upper indices to lower indices for now; for instance, the last equation can be written as ##\epsilon'_j=(\tilde A^{-1})_{ij}\epsilon_i##.
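As a quick numerical sanity check that duality forces the book's formula (a minimal numpy sketch; the matrix ##A## below is an arbitrary invertible example I picked, not anything from the book), one can represent basis vectors as columns and dual covectors as rows, transform each as stated, and confirm that ##\epsilon'^j(e'_i)=\delta^j_i## survives:

```python
import numpy as np

# Minimal sketch: R^3 with the standard basis; A is an arbitrary
# invertible matrix chosen for illustration.
A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 3.]])

e = np.eye(3)    # e[:, i] is the basis vector e_i (a column)
eps = np.eye(3)  # eps[j, :] is the dual covector eps^j (a row)

# Basis transformation e'_j = A^i_j e_i: the new basis vectors are the
# columns of A.
e_new = e @ A

# The book's dual transformation eps'^j = (A~^{-1})_i^j eps^i makes the
# new dual covectors the rows of A^{-1}.
eps_new = np.linalg.inv(A) @ eps

# Duality check: eps'^j(e'_i) = delta^j_i.
print(np.allclose(eps_new @ e_new, np.eye(3)))  # True

# Transforming the dual basis with A itself instead breaks duality:
eps_naive = A @ eps
print(np.allclose(eps_naive @ e_new, np.eye(3)))  # False for this A
</code>
```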

I just want to have a crystal clear understanding of why this notation is chosen. A basis transformation is an automorphism from ##V## to ##V##, and there's a result saying that the set of automorphisms is isomorphic to ##T_1^1(V)##. So ##A## can be identified with a ##(1,1)## tensor ##T## that eats a covector and a vector, and its element indexing is inherited from that tensor: ##T^i_{\ \ j}=T(\epsilon^i,e_j)##. So we also represent the elements of ##A## as ##A^i_{\ \ j}##.
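That identification is easy to check numerically (again just an illustrative sketch, with ##T## taken to be the bilinear map ##(\omega,v)\mapsto\omega(Av)## built from the same arbitrary ##A## as above): feeding the standard dual covectors and basis vectors into ##T## recovers exactly the matrix entries of ##A##.

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 3.]])

def T(omega, v):
    """The (1,1) tensor associated with the automorphism v -> A v."""
    return omega @ (A @ v)

e = np.eye(3)    # e[:, j] is the basis vector e_j
eps = np.eye(3)  # eps[i, :] is the dual covector eps^i

# Components T^i_j = T(eps^i, e_j) reproduce the matrix entries of A.
components = np.array([[T(eps[i, :], e[:, j]) for j in range(3)]
                       for i in range(3)])
print(np.allclose(components, A))  # True: T^i_j = A^i_j
```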

But I'm confused why we're writing staggered index notation for ##(\tilde A^{-1})## as ##(\tilde A^{-1})_i^{\ \ \ j}## - is there some concrete reason? Should I imagine ##(\tilde A^{-1})## as some automorphism from ##V^*## to ##V^*##, and using similar logic as before, say that it can be identified with some tensor ##\tilde T\in T^1_1(V^*)##? In that case, elements of ##\tilde T## are represented as ##\tilde T(e_i,\epsilon^j)=\tilde T_i^{\ \ \ j}##, so we represent ##(\tilde A^{-1})## by ##(\tilde A^{-1})_i^{\ \ \ j}##?

For the dual basis transformation, why can't I just write ##\epsilon'^j=A^j_{\ \ \ k}\epsilon^k##, so that ##A^i_{\ \ \ j}A^j_{\ \ \ k}=\delta^i_{\ \ \ k}##? Or why can't I write ##\epsilon'^j=A^j_{\ \ \ k}\epsilon^k## and ##e'_j=A_j^{\ \ \ i}e_i##, so that ##A_j^{\ \ \ i}A^j_{\ \ \ k}=\delta^i_{\ \ \ k}##?
 
  • #2
If you have a vector ##V^i## and a metric ##g_{ij}## then you have the dual vector ##\omega_i=g_{ij}V^j##. If you also have a vector basis transformation matrix ##A^i{}_j## then$$\begin{eqnarray*}
V'^i&=&A^i{}_jV^j\\
g'_{ij}V'^j&=&g'_{ik}A^k{}_jV^j\\
\omega'_i&=&g'_{ik}A^k{}_jg^{jl}\omega_l
\end{eqnarray*}$$We would like the ##g'Ag^{-1}## term on the right to be our transformation matrix for covectors. Obviously we can just multiply out the components to get a single matrix, and it makes sense to follow the rules for tensors and say that the metric and its inverse lowered and raised an index. Hence the transformation matrix should have one lower and one upper index.
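Working this out numerically makes the point concrete. In the sketch below (my own illustration: both ##A## and the metric are arbitrary, and the transformation law ##g'_{ij}=(A^{-1})^k{}_i(A^{-1})^l{}_j g_{kl}## for the metric components is supplied, since the derivation above doesn't write it out), the combined matrix ##g'_{ik}A^k{}_j g^{jl}## collapses to the inverse transpose of ##A##:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary invertible component transformation, V'^i = A^i_j V^j.
A = rng.normal(size=(3, 3))
A_inv = np.linalg.inv(A)

# Arbitrary metric g_{ij}: symmetric positive definite.
M = rng.normal(size=(3, 3))
g = M @ M.T + 3.0 * np.eye(3)
g_inv = np.linalg.inv(g)

# Metric components carry two lower indices, so they transform with two
# factors of A^{-1}: g'_{ij} = (A^{-1})^k_i (A^{-1})^l_j g_{kl}.
g_new = A_inv.T @ g @ A_inv

# The combined covector transformation matrix g'_{ik} A^k_j g^{jl} ...
B = g_new @ A @ g_inv

# ... is exactly the inverse transpose of A.
print(np.allclose(B, A_inv.T))  # True
```

So the covector transformation matrix really is ##(\tilde A^{-1})##, with one index lowered by ##g'## and one raised by ##g^{-1}##, which is what the staggered ##(\tilde A^{-1})_i{}^j## placement records.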
 
  • #3
Thanks for the response! But I'm sorry @Ibix, I couldn't follow - I think I did not frame my question very well. My doubt isn't why one index is up and the other is down - it's why the indices are staggered in that particular way. There are three paragraphs in my OP; the first I'm pretty sure about (though you can correct me if I'm wrong), and it explains why the indices are staggered the way they are in ##e'_j=A^i_{\ \ \ j}e_i##.

Regarding 2nd paragraph, that's a hand-wavy explanation I've cooked up that I'm not at all sure about. I don't even know why the transpose of ##A^i_{\ \ \ j}## is written as ##\tilde A^{\ \ \ i}_j##. See here (an excerpt from the book) for an example - 3rd equality first term:
[attached image: excerpt from the book]

And my question in the final paragraph still stands.
 
  • #4
If I understand the question in the last paragraph, the reason why the indices are laid out the way they are is a convention. For example, Carroll's lecture notes use this one, but note that Schutz always lays out coordinate transforms with northwest/southeast indices.

I don't have strong feelings either way (except to wish that we could get all physicists together and agree a set of conventions for once...), but I have a weak preference for the standard in this thread. If you had a tensor ##T^i{}_j## and you applied the metric and inverse metric ##g_{ik}T^k{}_lg^{lj}## you would write the result as ##T_i{}^j##, wouldn't you? ##A## is a matrix not a tensor, but you can still multiply out the components of ##g'_{ik}A^k{}_lg^{lj}## and it would make sense to place the indices in a consistent way.
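A two-line check of why the horizontal placement is worth tracking at all (again just an illustrative sketch with arbitrary components): ##T_i{}^j=g_{ik}T^k{}_lg^{lj}## is the matrix ##gTg^{-1}##, and in general it differs from ##T^i{}_j##, so the two staggered placements label genuinely different component arrays.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3))    # components T^i_j (arbitrary illustration)
M = rng.normal(size=(3, 3))
g = M @ M.T + 3.0 * np.eye(3)  # arbitrary metric g_{ij}

# T_i^j = g_{ik} T^k_l g^{lj}, i.e. the matrix g T g^{-1}.
T_swapped = g @ T @ np.linalg.inv(g)

# The staggered placements label different arrays in general:
print(np.allclose(T_swapped, T))  # False for generic g and T
```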

I have to say that matrix algebra and tensor notation aren't quite the same thing and you can mislead yourself if you try to see too many parallels. I don't think you "have to" notate a transpose in any particular way. It doesn't even make sense to talk about the transpose of a mixed-index tensor. I think it works for the transformation matrix, but I'm not sure that I'd think of it that way.
 
  • #5
Ibix said:
For example, Carroll's lecture notes use this one, but note that Schutz always lays out coordinate transforms with northwest/southeast indices
Note that Carroll's book, instead, uses only northwest/southeast indices.
 
