Lorentz transformation matrix and its inverse

  • #1
dyn
Given the Lorentz matrix ##\Lambda^u{}_v## its transpose is ##\Lambda^v{}_u##, but what is its transpose? I have seen ##\Lambda^u{}_a \Lambda^u{}_b = \delta^b{}_a##, which implies an inverse. This seems to be some sort of swapping of rows and columns, but to get the inverse you also need to replace ##v## with ##-v##? Also, in the LT matrix, is it the 1st slot that represents rows, or the top index?
 
  • #2
dyn said:
Given the Lorentz matrix ##\Lambda^u{}_v## its transpose is ##\Lambda^v{}_u##, but what is its transpose?
I presume you meant to ask what is its inverse (not transpose).
I have seen ##\Lambda^u{}_a \Lambda^u{}_b = \delta^b{}_a##, which implies an inverse.
Where did you see that formula? In general it is not correct. As you say, the sign of ##v## also needs to be changed. The Lorentz matrix ##\Lambda## in a basis can be expressed as ##A^{-1}L(v)A## where ##A## is the matrix for changing coordinates from the given basis to one in which the ##x## axis points in the direction of motion, and

$$L(v)=
\gamma \left( \begin{array}{cccc}
1 & -\beta & 0 & 0 \\
-\beta & 1 & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\end{array} \right)$$

where ##\beta\equiv\frac{v}{c}##.
Then the inverse of this is ##A^{-1}L(v)^{-1}A## and

$$L(v)^{-1}=
\gamma \left( \begin{array}{cccc}
1 & \beta & 0 & 0 \\
\beta & 1 & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\end{array} \right)$$
 
  • #3
A Lorentz-transformation matrix is defined as a matrix in ##\mathbb{R}^{4 \times 4}## that leaves the Minkowski pseudometric ##\eta_{\mu \nu}=\mathrm{diag}(1,-1,-1,-1)## invariant, which means
$${L^{\mu}}_{\rho} {L^{\nu}}_{\sigma} \eta_{\mu \nu} = \eta_{\rho \sigma}.$$
Written in matrix notation this reads
$$\hat{L}^T \hat{\eta} \hat{L}=\hat{\eta}.$$
Since ##\hat{\eta}=\hat{\eta}^{-1}##, multiplying with ##\hat{\eta}## from the left and with ##\hat{L}^{-1}## from the right gives
$$\hat{L}^{-1}=\hat{\eta} \hat{L}^{T} \hat{\eta}.$$
For a rotation-free boost with three-velocity ##\vec{v}##, you have
$$\hat{L}_B(\vec{v})=\begin{pmatrix}
\gamma & -\gamma \vec{v}^T \\
-\gamma \vec{v} & (\gamma-1) \hat{v} \otimes \hat{v}+\mathbb{1}_3
\end{pmatrix},$$
where ##\hat{v}=\vec{v}/|\vec{v}|## is the unit vector along ##\vec{v}## (units with ##c=1##).
Then you indeed get
$$\hat{L}_B^{-1}(\vec{v})=\hat{L}_B(-\vec{v}),$$
as it should be.
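To make this concrete, here is a minimal numerical sketch (assuming numpy and units with ##c=1##; the helper name boost is just for illustration) that builds ##\hat{L}_B(\vec{v})## as above and checks the defining relation, the identity ##\hat{L}^{-1}=\hat{\eta}\hat{L}^{T}\hat{\eta}##, and ##\hat{L}_B^{-1}(\vec{v})=\hat{L}_B(-\vec{v})##:

```python
import numpy as np

def boost(v):
    """Rotation-free boost for 3-velocity v, in units with c = 1."""
    v = np.asarray(v, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    n = v / np.linalg.norm(v)              # unit vector along the velocity
    L = np.empty((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = -gamma * v                  # -gamma v^T  (row)
    L[1:, 0] = -gamma * v                  # -gamma v    (column)
    L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(n, n)
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])
L = boost([0.3, -0.4, 0.5])

assert np.allclose(L.T @ eta @ L, eta)                          # L^T eta L = eta
assert np.allclose(np.linalg.inv(L), eta @ L.T @ eta)           # L^-1 = eta L^T eta
assert np.allclose(np.linalg.inv(L), boost([-0.3, 0.4, -0.5]))  # L^-1 = L(-v)
```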
 
  • #4
Thanks for your replies. Yes, my original question was about the inverse. Thanks for realising that. According to what you have said, the following, which I found in some notes, seems wrong, as it is just a transpose and involves no sign change: "the inverse of ##\Lambda^a{}_b## is ##\Lambda_b{}^a##". Am I right?
I have also looked at a solution to a question involving the Faraday tensor. It involves calculating ##F'^{uv}## given the equation ##F'^{uv} = \Lambda^u{}_b \Lambda^v{}_b F^{ab}##. So I have three 4x4 matrices which I need to multiply together. The bit I don't understand is that the solution multiplies them together with the ##F^{ab}## matrix in the middle. Why has the order been changed?
 
  • #5
dyn said:
Thanks for your replies. Yes, my original question was about the inverse. Thanks for realising that. According to what you have said, the following, which I found in some notes, seems wrong, as it is just a transpose and involves no sign change: "the inverse of ##\Lambda^a{}_b## is ##\Lambda_b{}^a##". Am I right?
Probably. But it depends on the context and what they meant by the symbols in the notation they were using.
I have also looked at a solution to a question involving the Faraday tensor. It involves calculating ##F'^{uv}## given the equation ##F'^{uv} = \Lambda^u{}_b \Lambda^v{}_b F^{ab}##. So I have three 4x4 matrices which I need to multiply together. The bit I don't understand is that the solution multiplies them together with the ##F^{ab}## matrix in the middle. Why has the order been changed?
First, I think you meant to write ##\Lambda^u_a \Lambda^v_b F^{ab}##, not ##\Lambda^u_b \Lambda^v_b F^{ab}##; in the latter, ##b## appears three times and the ##a## in ##F^{ab}## is left unsummed.

Secondly, one nice thing about Einstein notation is that, unlike matrix notation, it doesn't matter what order you write the factors in. What does matter is what indices you use and whether they are up or down. The choice of indices and position determines the order of matrix multiplication, not the order of presentation of the factors. So in Einstein notation

$$\Lambda^u_a \Lambda^v_b F^{ab}=\Lambda^u_a F^{ab} \Lambda^v_b =F^{ab}\Lambda^u_a \Lambda^v_b $$
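As a concrete illustration (a sketch assuming numpy; the x-boost and the random antisymmetric ##F## are stand-ins, not data from any particular problem), np.einsum evaluates the index expression identically no matter how the factors are listed, and the result agrees with the matrix product ##\Lambda F \Lambda^T##:

```python
import numpy as np

rng = np.random.default_rng(0)
# any Lorentz matrix works; a boost along x with beta = 0.6 (c = 1)
b, g = 0.6, 1.25
Lam = np.array([[g, -b*g, 0, 0],
                [-b*g, g, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
A = rng.normal(size=(4, 4))
F = A - A.T                # an antisymmetric stand-in for the Faraday tensor

# the index expression Lam^u_a Lam^v_b F^{ab}; factor order is irrelevant
F1 = np.einsum('ua,vb,ab->uv', Lam, Lam, F)
F2 = np.einsum('ab,ua,vb->uv', F, Lam, Lam)
assert np.allclose(F1, F2)
# in matrix notation the same contraction is  Lam F Lam^T
assert np.allclose(F1, Lam @ F @ Lam.T)
```

This is exactly the matrix form ##\bar{F} = \Lambda F \Lambda^T## derived in post #17 below.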
 
  • #6
It is very important to keep the order of the indices in the Lorentz-transformation matrix; the natural index pattern is that one index is an upper and the other a lower one. As detailed in #3, in matrix-vector notation you have
$$\hat{\Lambda}^{-1} = \hat{\eta} \hat{\Lambda}^T \hat{\eta}.$$
Let's translate this into the index notation
$${(\hat{\Lambda}^{-1})^{\mu}}_{\nu} = \eta^{\mu \rho} \eta_{\nu \sigma} {\Lambda^{\sigma}}_{\rho}={\Lambda_{\nu}}^{\mu}.$$
Here, one defines the index lowering and raising operation as if the Lorentz matrix were a tensor (which of course it is not). So your formula is correct.

I don't understand the question concerning the transformation law of the Faraday tensor. As a tensor of 2nd rank, it transforms like a Kronecker product of two vectors, i.e., like ##x^{\mu} x^{\nu}##:
$$\overline{F}^{\mu \nu}(\overline{x}) = {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma} F^{\rho \sigma}(\hat{\Lambda}^{-1} \overline{x}).$$
Here it is important to realize that the Faraday tensor is in fact a tensor field, and on the right-hand side it depends on the old coordinates, which I have expressed in terms of the new ones. One must not forget to also transform the argument of fields in the proper way!
 
  • #7
Thanks again for taking the time to reply. Sorry to be a pain here, but I'm still confused.
andrewkirk said:
Secondly, one nice thing about Einstein notation is that, unlike matrix notation, it doesn't matter what order you write the factors in. What does matter is what indices you use and whether they are up or down. The choice of indices and position determines the order of matrix multiplication, not the order of presentation of the factors. So in Einstein notation

$$\Lambda^u_a \Lambda^v_b F^{ab}=\Lambda^u_a F^{ab} \Lambda^v_b =F^{ab}\Lambda^u_a \Lambda^v_b $$

I'm confused about this, as it seems to imply matrices can be multiplied in any order and give the same answer, which as far as I know is not true in general for matrices. Also, I thought the indices for tensors should not be placed one above another in a vertical line?

vanhees71 said:
Let's translate this into the index notation
$${(\hat{\Lambda}^{-1})^{\mu}}_{\nu} = \eta^{\mu \rho} \eta_{\nu \sigma} {\Lambda^{\sigma}}_{\rho}={\Lambda_{\nu}}^{\mu}.$$
Here, one defines the index lowering and raising operation as if the Lorentz matrix was a tensor (which it is of course not). So your formula is correct.
This seems to say that the inverse of ##{\Lambda^{\mu}}_{\nu}## is ##{\Lambda_{\nu}}^{\mu}##, but this is just the transpose. It doesn't contain the necessary sign change.
 
  • #8
dyn said:
I'm confused about this, as it seems to imply matrices can be multiplied in any order and give the same answer, which as far as I know is not true in general for matrices.
##\Lambda## is a matrix. ##\Lambda^\mu{}_\nu## is a real number. Matrix multiplication isn't commutative, but the multiplication operation on the set of real numbers is.

dyn said:
Also, I thought the indices for tensors should not be placed one above another in a vertical line?
Yes, it should be avoided if you intend to use the metric to raise and lower indices.

dyn said:
This seems to say that the inverse of ##{\Lambda^{\mu}}_{\nu}## is ##{\Lambda_{\nu}}^{\mu}##, but this is just the transpose. It doesn't contain the necessary sign change.
If ##\Lambda## denotes a matrix, and ##\Lambda^\mu{}_\nu## denotes the number on row ##\mu##, column ##\nu## of that matrix, then the number on row ##\mu##, column ##\nu## of ##\Lambda^T## is ##\Lambda^{\nu}{}_\mu##, not ##\Lambda_\nu{}^\mu##.
 
  • #9
A Lorentz transformation matrix is a 4×4 matrix ##\Lambda## such that ##\Lambda^T\eta\Lambda=\eta##. Multiply this equation by ##\eta^{-1}## from the left, and you see that ##\Lambda^{-1}=\eta^{-1}\Lambda^T\eta##.

There's a bunch of things that we need to understand to relate this to the index notation:

##\Lambda## is the matrix of components of a type (1,1) tensor. This means that the number on row ##\mu##, column ##\nu##, is the ##{}^\mu{}_\nu## component of that tensor. That tensor is also denoted by ##\Lambda##, so its ##{}^\mu{}_\nu## component is denoted by ##\Lambda^\mu{}_\nu##.

Similarly, ##\eta## is the matrix of components of the Minkowski metric tensor ##\eta##, so the number on row ##\mu##, column ##\nu##, is the ##{}_{\mu\nu}## component of the Minkowski metric tensor, which is written as ##\eta_{\mu\nu}##.

##\eta^{-1}## is, however, not defined as a matrix of components of a tensor. It's simply the inverse of ##\eta##, which happens to be equal to ##\eta##. But it's still convenient to write the number on row ##\mu##, column ##\nu## of ##\eta^{-1}## as ##\eta^{\mu\nu}##, because this ensures that the summation convention works the way it's supposed to: ##\eta^{\mu\nu}\eta_{\nu\rho} =\delta^\mu_\rho##.

So what are the numbers on row ##\mu##, column ##\nu## of ##\Lambda^{-1}## and ##\Lambda^T##? If we denote them by ##(\Lambda^{-1})^\mu{}_\nu## and ##(\Lambda^T)^\mu{}_\nu=\Lambda^\nu{}_\mu## respectively, and use the definition of matrix multiplication and the convention that the etas raise and lower indices, we get
$$(\Lambda^{-1})^\mu{}_\nu =(\eta^{-1}(\Lambda^T)\eta)^\mu{}_{\nu} =\eta^{\mu\rho}\Lambda^\sigma{}_\rho\eta_{\sigma\nu} =\Lambda_\nu{}^\mu.$$
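A quick numerical sanity check of that last line (a sketch assuming numpy; the x-boost is just an example of a Lorentz matrix): contracting with the two etas reproduces the matrix inverse.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # equal to its own inverse
b, g = 0.6, 1.25
Lam = np.array([[g, -b*g, 0, 0],
                [-b*g, g, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])

# (Lam^{-1})^mu_nu = eta^{mu rho} Lam^sigma_rho eta_{sigma nu}
Lam_inv = np.einsum('mr,sr,sn->mn', eta, Lam, eta)
assert np.allclose(Lam_inv, np.linalg.inv(Lam))
```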
 
  • #10
And that's why it is so important also to keep the horizontal order of the indices right, as stressed above. Note that
$${\Lambda_{\nu}}^{\mu}=\eta_{\nu \rho} \eta^{\mu \sigma} {\Lambda^{\rho}}_{\sigma} \neq {\Lambda^{\nu}}_{\mu}.$$
(See also the previous posting #9 by Fredrik).
 
  • #11
My two main texts that use differential geometry - Schutz's Intro to GR and Lee's Riemannian Manifolds - take different approaches on keeping upper and lower indices in order.

Schutz, which I read first, follows the principles advocated here of always keeping them in order, hence writing things like ##{F^{ab}}_{cd}##. Lee on the other hand generally writes ##F^{ab}_{cd}##. The advantages of Lee's notation are that (1) it's faster to write, as one doesn't have to put extra braces around ##F^{ab}## before doing the ##{}_{cd}## part and (2) it takes up less horizontal space, so you have to break equation lines less often, which is a major issue in tensor operations.

The point of vanhees and Fredrik above that one can get confused if one doesn't preserve order between upper and lower indices is a good one. It prompted me to review Lee's book to see if he says anything about his choice of notation. I didn't find an explanation, but I did find that he sometimes does adopt the Schutz approach. For instance he always writes Riemann tensors, when not all indices are on the same level, in order, e.g. as ##{R_{abc}}^d## rather than ##R_{abc}^d##. No doubt this is in order to avoid the confusion that is warned against above. Indeed he emphasises the point that ##{R_{abc}}^d,{{R_{ab}}^c}_d,{{R_a}^b}_{cd},{R^a}_{bcd}## are all different. If you look at the LaTeX code for the last line, you can see the mess of braces one has to write to give those symbolisations, which gives the strong temptation to ditch the ordering - but in this case it would definitely be a bad idea.

On the other hand he always writes Christoffel symbols without ordering, i.e. ##\Gamma^a_{bc}##. I suppose that's because nobody ever raises or lowers indices of Christoffel symbols (and yes I know that's because they're not actually tensors, but nevertheless they are written in equations mixed up with tensors).

From this I infer that his approach is a pragmatic one under which he preserves order when it matters because the order is not made obvious by the context - e.g. because there is raising or lowering going on. But he puts the upper indices above the lower ones when the order is not in question.

I think when one is first learning (and when one is writing for learners) it is best to use Schutz's approach, because otherwise it's easy to get confused, and there's lots of raising and lowering going on. Further, when one is doing relativity, rather than general differential geometry, one never has more than four indices, so the problem of running out of horizontal space on the page rarely arises.

Edit: I just noticed Fredrik's comment above about placing indices above one another: 'Yes, it should be avoided if you intend to use the metric to raise and lower indices.' [emphasis added by me] I guess that sounds rather like Lee's approach.
 
  • #12
I'd never buy a book that doesn't take good care of the order of indices. This may work sometimes, if the tensors are symmetric, but with a GR book that doesn't keep track of the index order, you get confused at the latest when the curvature tensor is introduced.

Already for the Faraday tensor in electrodynamics it's a disaster, because it's antisymmetric. What should ##F_{\mu}^{\nu}## mean? Note that ##{F_{\mu}}^{\nu}=-{F^{\nu}}_{\mu}##!
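A short numerical illustration of that sign (a sketch assuming numpy; the random antisymmetric matrix stands in for ##F^{\mu\nu}##): lowering the first index and lowering the second index give mixed components that differ by a sign, so a notation that doesn't record the order is ambiguous.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
A = rng.normal(size=(4, 4))
F = A - A.T                                    # antisymmetric, like F^{mu nu}

F_first_low = np.einsum('mr,rn->mn', eta, F)   # {F_mu}^nu : first index lowered
F_second_low = np.einsum('mr,nr->mn', F, eta)  # {F^mu}_nu : second index lowered
# {F_mu}^nu = -{F^nu}_mu, so "F_mu^nu" without the horizontal order is ambiguous
assert np.allclose(F_first_low, -F_second_low.T)
```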
 
  • #13
In addition, one should say that the point of proper notation is not to minimize the author's typing work but to maximize readability and convenience for the reader. If you are lucky you type a text once and have thousands of readers!
 
  • #14
andrewkirk said:
The Lorentz matrix ##\Lambda## in a basis can be expressed as ##A^{-1}L(v)A## where ##A## is the matrix for changing coordinates from the given basis to one in which the ##x## axis points in the direction of motion, and

$$
L(v)= \gamma \left( \begin{array}{cccc} 1 & -\beta & 0 & 0 \\ -\beta & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{array} \right)
$$

where ##\beta\equiv\frac{v}{c}##.

This is not correct. The factor ##\gamma## does not multiply the entire matrix; it only multiplies the upper left 2x2 portion. The correct matrix for a Lorentz boost in the ##x## direction is:

$$
L(v)= \left( \begin{array}{cccc} \gamma & -\beta \gamma & 0 & 0 \\ -\beta \gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{array} \right)
$$
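A quick check of both versions (a sketch assuming numpy, with ##\beta = 0.6## chosen arbitrarily): the matrix above satisfies the defining relation ##L^T \eta L = \eta##, while the overall-##\gamma## version from post #2 does not.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
b = 0.6
g = 1.0 / np.sqrt(1.0 - b**2)

# gamma only multiplies the t-x block
L = np.array([[g, -b*g, 0, 0],
              [-b*g, g, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
assert np.allclose(L.T @ eta @ L, eta)

# the overall-gamma version fails the defining relation
# (its y and z diagonal entries become gamma, not 1)
L_bad = g * np.array([[1, -b, 0, 0],
                      [-b, 1, 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
assert not np.allclose(L_bad.T @ eta @ L_bad, eta)
```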
 
  • #15
PeterDonis said:
This is not correct. The factor ##\gamma## does not multiply the entire matrix; it only multiplies the upper left 2x2 portion.
Good pickup. I started off with the answer for the 2 x 2 case (one spatial dimension) to keep it simple, but then decided to include the two suppressed dimensions and forgot that in that case one can no longer put the ##\gamma## outside the matrix.

vanhees71 said:
one should say that the point of proper notation is not to minimize the author's typing work but to maximize readability and convenience for the reader.
Indeed, but when one is dealing with arbitrarily many dimensions rather than only four, minimising the line breaks in the middle of equations improves readability (in my opinion).
 
  • #16
Thanks for all your replies; I am slowly getting there. I follow the argument in #9 that the inverse of ##\Lambda^u{}_v## is ##\Lambda_v{}^u##, but if I am just given the element ##\Lambda^u{}_v##, how do I find the corresponding element in the inverse? I see two Minkowski metric elements multiplied together, which wouldn't produce the overall sign change.

As regards the Faraday tensor equation ##F'^{uv} = \Lambda^u{}_a \Lambda^v{}_b F^{ab}##, I realize these are just elements multiplied together, so the order doesn't matter, but if I want to do the matrix multiplication, how do I decide the order of the matrix multiplication?
 
  • #17
dyn said:
.. if I am just given the element ##\Lambda^u{}_v##, how do I find the corresponding element in the inverse?
See below.
As regards the Faraday tensor equation ##F'^{uv} = \Lambda^u{}_a \Lambda^v{}_b F^{ab}##, I realize these are just elements multiplied together, so the order doesn't matter, but if I want to do the matrix multiplication, how do I decide the order of the matrix multiplication?
Follow the rule of matrix multiplication: the second index of the first matrix is summed (or contracted) with the first index of the second matrix: [tex]\bar{F}^{ab} = \left( \Lambda^{a}{}_{c} \ F^{cd} \right) \ \Lambda^{b}{}_{d} = \Lambda^{b}{}_{d} \left( \Lambda \ F \right)^{ad}.[/tex] Now, let [itex]\Lambda F = B[/itex], [tex]\bar{F}^{ab} = \Lambda^{b}{}_{d} \ B^{ad} = \Lambda^{b}{}_{d} \ ( B^{T})^{da} ,[/tex] or [tex]\bar{F}^{ab} = \left( \Lambda \ B^{T}\right)^{ba} = \left( B \ \Lambda^{T}\right)^{ab} .[/tex] Therefore [tex]\bar{F} = B \ \Lambda^{T} = \Lambda \ F \ \Lambda^{T} .[/tex] But why do you want it in matrix form? The world isn’t made of only rank-2 tensors. Anyway, here are some rules and conventions you need to follow when you treat rank-2 Lorentz tensors as matrices:

i) I have already given you the first rule, which is the matrix multiplication rule above.

ii) [itex]\eta[/itex] is a (0,2) tensor with components [itex]\eta_{\mu\nu}[/itex]; these are also the elements of the diagonal matrix [itex]\eta[/itex].

iii) [itex]\eta^{\mu\nu}[/itex] is a (2,0) tensor and can be regarded as the matrix element of the inverse matrix [itex]\eta^{-1}[/itex].

iv) [itex]\Lambda[/itex] is a Lorentz group element. The Lorentz group is a matrix Lie group and [itex]\Lambda[/itex], therefore, has a matrix representation. The convention for its matrix element is [itex]\Lambda^{\mu}{}_{\nu}[/itex], where [itex]\mu[/itex] labels the rows (i.e. the first index on a matrix) and [itex]\nu[/itex] labels the columns (i.e. the second index on a matrix). This convention, though, makes it mandatory to represent [itex]\Lambda^{-1}[/itex], [itex]\Lambda^{T}[/itex] and all other MATRIX OPERATIONS by the same index structure for their matrix elements. So, like [itex]\Lambda^{\mu}{}_{\nu}[/itex], we must write [itex](\Lambda^{-1})^{\mu}{}_{\nu}[/itex], [itex](\Lambda^{T})^{\mu}{}_{\nu}[/itex] and so on.

v) Even though [itex]\Lambda^{\mu}{}_{\nu}[/itex] is NOT a tensor, we can raise and lower its indices by the metric tensor [itex]\eta[/itex]. This becomes important when dealing with the infinitesimal part of [itex]\Lambda[/itex]. Examples:
(1) The infinitesimal group parameters satisfy the following MATRIX equation, [tex](\eta \ \omega)^{T} = - (\eta \ \omega) . \ \ \ \ (1)[/tex] The [itex]\alpha \beta[/itex]-matrix element is [tex]\left( (\eta \ \omega)^{T}\right)_{\alpha \beta} = - \left( \eta \ \omega \right)_{\alpha \beta} , [/tex] or, by doing the transpose on the LHS, [tex]\left( \eta \ \omega \right)_{\beta \alpha} = - \left( \eta \ \omega \right)_{\alpha \beta} .[/tex] Following the above-mentioned rule for matrix multiplication, we get [tex]\eta_{\beta \mu} \ \omega^{\mu}{}_{\alpha} = - \eta_{\alpha \rho} \ \omega^{\rho}{}_{\beta} .[/tex] Thus [tex]\omega_{\beta \alpha} = - \omega_{\alpha \beta} . \ \ \ \ \ (2)[/tex] You can also start from (2) and go backward to (1).

(2) The defining relation of the Lorentz group is given by [tex]\eta_{\mu \nu} \ \Lambda^{\mu}{}_{\alpha} \ \Lambda^{\nu}{}_{\beta} = \eta_{\alpha \beta} . \ \ \ (3)[/tex] Before we carry on with raising and lowering indices, I would like to make two important side notes on Eq(3): A) equations (1) or (2) are the infinitesimal version of Eq(3), and B) since the [itex]\Lambda[/itex]’s form a group, Eq(3) is also satisfied by the inverse element, [tex]\eta_{\mu \nu} \left( \Lambda^{-1}\right)^{\mu}{}_{\alpha} \left( \Lambda^{-1}\right)^{\nu}{}_{\beta} = \eta_{\alpha \beta} . \ \ (4)[/tex]
Okay, lowering the index on the first [itex]\Lambda[/itex] in Eq(3), we obtain [tex]\Lambda_{\nu \alpha} \ \Lambda^{\nu}{}_{\beta} = \eta_{\alpha \beta} .[/tex] Now, raising the index [itex]\alpha[/itex] on both sides (or, which is the same thing, contracting with [itex]\eta^{\alpha \tau}[/itex]), we obtain [tex]\Lambda_{\nu}{}^{\tau} \ \Lambda^{\nu}{}_{\beta} = \delta^{\tau}{}_{\beta} . \ \ \ \ \ (5)[/tex] Notice that Eq(5) does not follow the rule of matrix multiplication. This is because of the funny index structure of [itex]\Lambda_{\nu}{}^{\tau}[/itex], which does not agree with our convention in (iv) above. However, we know the following matrix equation: [tex]\left( \Lambda^{-1} \ \Lambda \right)^{\tau}{}_{\beta} = \delta^{\tau}{}_{\beta} .[/tex] So, using the rule for matrix multiplication, we find [tex]\left( \Lambda^{-1}\right)^{\tau}{}_{\nu} \ \Lambda^{\nu}{}_{\beta} = \delta^{\tau}{}_{\beta} . \ \ \ \ \ (6)[/tex] Comparing (5) with (6), we find [tex]\left( \Lambda^{-1}\right)^{\tau}{}_{\nu} = \Lambda_{\nu}{}^{\tau} . \ \ \ \ \ \ (7)[/tex] We will come to the (matrix) meaning of this in a minute; let us first substitute (7) in (4) to obtain [tex]\eta_{\mu \nu} \ \Lambda_{\alpha}{}^{\mu} \ \Lambda_{\beta}{}^{\nu} = \eta_{\alpha \beta} .[/tex] This shows that we could have started with the convention [itex]\Lambda_{\mu}{}^{\nu}[/itex] for the matrix element of [itex]\Lambda[/itex]. The lesson is this: once you choose a convention, you must stick with it.

vi) Finally, Eq(7) means the following: given the matrix
[tex]
\Lambda = \begin{pmatrix}
\Lambda^{0}{}_{0} & \Lambda^{0}{}_{1} & \Lambda^{0}{}_{2} & \Lambda^{0}{}_{3} \\
\Lambda^{1}{}_{0} & \Lambda^{1}{}_{1} & \Lambda^{1}{}_{2} & \Lambda^{1}{}_{3} \\
\Lambda^{2}{}_{0} & \Lambda^{2}{}_{1} & \Lambda^{2}{}_{2} & \Lambda^{2}{}_{3} \\
\Lambda^{3}{}_{0} & \Lambda^{3}{}_{1} & \Lambda^{3}{}_{2} & \Lambda^{3}{}_{3}
\end{pmatrix} ,
[/tex]

the inverse is obtained by changing the sign of the [itex]\Lambda^{0}{}_{i}[/itex] and [itex]\Lambda^{i}{}_{0}[/itex] components ONLY, and then transposing ALL indices:

[tex]
\Lambda^{-1} = \begin{pmatrix}
\Lambda^{0}{}_{0} & -\Lambda^{1}{}_{0} & -\Lambda^{2}{}_{0} & -\Lambda^{3}{}_{0} \\
-\Lambda^{0}{}_{1} & \Lambda^{1}{}_{1} & \Lambda^{2}{}_{1} & \Lambda^{3}{}_{1} \\
-\Lambda^{0}{}_{2} & \Lambda^{1}{}_{2} & \Lambda^{2}{}_{2} & \Lambda^{3}{}_{2} \\
-\Lambda^{0}{}_{3} & \Lambda^{1}{}_{3} & \Lambda^{2}{}_{3} & \Lambda^{3}{}_{3}
\end{pmatrix} .
[/tex]
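This recipe can be checked mechanically (a sketch assuming numpy; inverse_by_recipe is just an illustrative name, and the boost-times-rotation below is an example of a non-symmetric Lorentz matrix, which makes the transposition step visible):

```python
import numpy as np

def inverse_by_recipe(Lam):
    """Flip the signs of the Lam^0_i and Lam^i_0 entries, then transpose."""
    M = Lam.copy()
    M[0, 1:] *= -1.0
    M[1:, 0] *= -1.0
    return M.T

eta = np.diag([1.0, -1.0, -1.0, -1.0])
b, g = 0.6, 1.25
boost_x = np.array([[g, -b*g, 0, 0],
                    [-b*g, g, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]])
th = 0.7
rot_z = np.array([[1, 0, 0, 0],
                  [0, np.cos(th), -np.sin(th), 0],
                  [0, np.sin(th), np.cos(th), 0],
                  [0, 0, 0, 1]])
Lam = rot_z @ boost_x          # still a Lorentz matrix, but not symmetric
assert np.allclose(Lam.T @ eta @ Lam, eta)
assert np.allclose(inverse_by_recipe(Lam), np.linalg.inv(Lam))
```

Composing with the rotation matters for the test: a pure boost matrix is symmetric, so the final transposition would otherwise be invisible.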
 
  • #18
Thanks for that reply. I understand the order of the matrix multiplication now. There is one last thing that is puzzling me: where does the negative sign come from when taking the inverse Lorentz matrix? The key equation seems to be ##(\Lambda^{-1})^u{}_v = \Lambda_v{}^u##, but what exactly does this mean, and how does it introduce the sign change? If the number on row ##u##, column ##v## of the Lorentz matrix is denoted by ##\Lambda^u{}_v##, what does ##\Lambda_v{}^u## denote, and where is the negative sign coming from?
 
  • #19
The sign change comes from the index lowering and raising rule:
$${(\Lambda^{-1})^{\mu}}_{\nu}={\Lambda_{\nu}}^{\mu}=\eta_{\nu \sigma} \eta^{\mu \rho} {\Lambda^{\sigma}}_{\rho}.$$
Since ##\hat{\eta}## is diagonal, each element of the transpose just picks up the factor ##\eta^{\mu\mu} \eta_{\nu\nu}## (no sum), which is ##-1## exactly when one of the two indices is temporal and the other spatial.
Note that in matrix notation this reads
$$\hat{\Lambda}^{-1} = (\hat{\eta} \hat{\Lambda} \hat{\eta})^{\mathrm{T}}=\hat{\eta} \hat{\Lambda}^{\mathrm{T}} \hat{\eta},$$
where I have used that ##\hat{\eta}=\hat{\eta}^{-1} = \hat{\eta}^{\mathrm{T}}=\mathrm{diag}(1,-1,-1,-1)##.
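A minimal elementwise check of that sign pattern (a sketch assuming numpy; the x-boost is just an example):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
b, g = 0.6, 1.25
Lam = np.array([[g, -b*g, 0, 0],
                [-b*g, g, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])

Lam_inv = np.empty((4, 4))
for mu in range(4):
    for nu in range(4):
        # transpose, times a sign that is -1 only for mixed time-space entries
        Lam_inv[mu, nu] = eta[mu, mu] * eta[nu, nu] * Lam[nu, mu]

assert np.allclose(Lam_inv, np.linalg.inv(Lam))
assert np.isclose(Lam_inv[0, 1], b * g)   # sign flipped relative to Lam[1, 0]
```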
 
  • #20
I would like to thank everyone for their patience with me on this thread and their help. I finally see where the sign change comes from, but it is so much easier just to remember to change every ##v## to ##-v##! I will be starting another thread soon, as I have some more questions. I'm hoping you can all help me again. Many thanks.
 

FAQ: Lorentz transformation matrix and its inverse

What is the Lorentz transformation matrix and its inverse?

The Lorentz transformation matrix is a mathematical tool used in special relativity to describe how measurements of space and time change between different reference frames that are moving at constant velocities relative to each other. Its inverse is used to transform measurements back to the original reference frame.

How is the Lorentz transformation matrix derived?

The Lorentz transformation matrix is derived from the postulates of special relativity: the laws of physics are the same for all observers in uniform motion, and the speed of light is the same in every inertial frame. Applying these postulates to the coordinates that different observers assign to events yields the Lorentz transformation matrix.

What is the difference between the Lorentz transformation matrix and the Galilean transformation matrix?

The Lorentz transformation matrix takes into account the effects of Einstein's special theory of relativity, including time dilation and length contraction, while the Galilean transformation matrix only considers the relative velocities of two reference frames. The Lorentz transformation matrix is used for describing phenomena at high speeds, while the Galilean transformation matrix is used for low-speed situations.

What are some practical applications of the Lorentz transformation matrix?

The Lorentz transformation matrix is essential for understanding and making calculations in special relativity. It is used in various fields, including particle physics, astrophysics, and engineering, to accurately describe and predict the behavior of objects moving at high speeds.

Are there any limitations to the Lorentz transformation matrix?

The Lorentz transformation matrix is based on the assumption that the laws of physics are the same for all inertial reference frames. It does not take into account acceleration or non-inertial reference frames. Therefore, it is not applicable in cases where these factors play a significant role, such as in general relativity.
