General Lorentz Matrix in Terms of Rapidities

In summary, the general Lorentz matrix can be written in terms of rapidities either as a boost (parametrized by a rapidity η and a unit direction vector n) combined with a spatial rotation, or by exponentiating the rotation and boost generators; the matrix exponential of those generators produces the sines, cosines, and hyperbolic functions in the result, provided genuine matrix powers are used rather than element-by-element powers.
  • #1
Passionflower
Does anybody have a reference or can write out the general (so not just a boost in only one direction) Lorentz matrix in terms of rapidities?
 
  • #2
Passionflower said:
Does anybody have a reference or can write out the general (so not just a boost in only one direction) Lorentz matrix in terms of rapidities?

Do you mean a boost in an arbitrary direction, or do you mean an arbitrary (restricted) Lorentz transformation (which is not necessarily a boost)? If you mean the former, look at page 541 of the second edition of Jackson.
 
  • #3
Yep, here it is:

[tex]\Lambda = \left( \begin{smallmatrix} \pm 1& 0 \\ 0& \pm \textbf{I} \end{smallmatrix} \right)\left( \begin{smallmatrix} \cosh\eta & -\textbf{n}\sinh\eta\\ -\textbf{n}\sinh\eta& \textbf{I}+\textbf{n}\circ\textbf{n}(\cosh\eta -1) \end{smallmatrix} \right)\left( \begin{smallmatrix} 1&0\\ 0&\textbf{R} \end{smallmatrix} \right)[/tex]
 
  • #4
Thaakisfox said:
Yep, here it is:

[tex]\Lambda = \left( \begin{smallmatrix} \pm 1& 0 \\ 0& \pm \textbf{I} \end{smallmatrix} \right)\left( \begin{smallmatrix} \cosh\eta & -\textbf{n}\sinh\eta\\ -\textbf{n}\sinh\eta& \textbf{I}+\textbf{n}\circ\textbf{n}(\cosh\eta -1) \end{smallmatrix} \right)\left( \begin{smallmatrix} 1&0\\ 0&\textbf{R} \end{smallmatrix} \right)[/tex]

That looks useful. I think I can work out what n is. What is R?
 
  • #5
Mentz114 said:
That looks useful. I think I can work out what n is. What is R?

An arbitrary 3x3 (spatial) rotation matrix.
 
  • #6
George Jones said:
An arbitrary 3x3 (spatial) rotation matrix.

Thanks. So for boosts only we can replace R with I, presumably?
 
  • #7
George Jones said:
Do you mean a boost in an arbitrary direction, or do you mean an arbitrary (restricted) Lorentz transformation (which is not necessarily a boost)? If you mean the former, look at page 541 of the second edition of Jackson.

Hmmmmm. I have the third edition. Perhaps I don't read enough, but that's about the first time I've seen any discussion of this topic that's had any potential for making sense.

Jackson describes S1, S2, and S3 as tensors somehow corresponding to roll, pitch, and yaw, and K1, K2, and K3 as tensors corresponding to rapidity change in the x, y, and z directions. (Okay, he doesn't use those words, but that's what the math looks like in the end.)

He defines
[tex]\begin{matrix}
L = \omega \cdot S - \zeta \cdot K
\\
A=e^L
\end{matrix}[/tex]​

so that if [itex]\omega=(w,0,0)[/itex] you get a rotation matrix, and if [itex]\zeta = (z,0,0)[/itex] you get a boost matrix, except for one little issue. There's a mysterious lack of imaginary numbers anywhere, but he's still getting cosines and sines when he takes [itex]e^L[/itex] for the [itex]\omega[/itex] part.

It takes a little getting used to that raising a constant to the power of a matrix gives you another matrix.

Should L be, instead

[tex]L = i \omega \cdot S - \zeta \cdot K[/tex]​

or is this hidden in the notation somewhere?
 
  • #8
JDoolin said:
He defines
[tex]\begin{matrix}
L = \omega \cdot S - \zeta \cdot K
\\
A=e^L
\end{matrix}[/tex]​

so that if [itex]\omega=(w,0,0)[/itex] you get a rotation matrix, and if [itex]\zeta = (z,0,0)[/itex] you get a boost matrix, except for one little issue. There's a mysterious lack of imaginary numbers anywhere, but he's still getting cosines and sines when he takes [itex]e^L[/itex] for the [itex]\omega[/itex] part.

It takes a little getting used to that raising a constant to the power of a matrix gives you another matrix.

Should L be, instead

[tex]L = i \omega \cdot S - \zeta \cdot K[/tex]​

or is this hidden in the notation somewhere?

The cosines and sines come from taking the matrix exponential. Try computing

[tex]\exp \begin{pmatrix}0 & -\theta \\ \theta & 0\end{pmatrix}[/tex]

and see what happens.
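For reference, the matrix exponential is defined by the same power series as the ordinary exponential,

[tex]e^{L} = \sum_{n=0}^{\infty} \frac{L^{n}}{n!} = I + L + \frac{L^{2}}{2!} + \frac{L^{3}}{3!} + \cdots[/tex]

where [itex]L^n[/itex] means repeated matrix multiplication. No imaginary numbers are needed: for the 2x2 generator above, [itex]L^2 = -\theta^2 I[/itex], so the antisymmetric generator itself plays the role that i plays in Euler's formula.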
 
  • #9
Ben Niehoff said:
The cosines and sines come from taking the matrix exponential. Try computing

[tex]\exp \begin{pmatrix}0 & -\theta \\ \theta & 0\end{pmatrix}[/tex]

and see what happens.


You gave me the hint I needed by using the term "matrix exponential" (http://en.wikipedia.org/wiki/Matrix_exponential).

I'm starting to have some luck using

Code:
S1 = {{0,-Theta}, {Theta, 0}}
A = IdentityMatrix[2]  (*When I tried to do the k=0 term, I got a 0^0 error.*)
For[n = 1, n < 6, n = n + 1,  (*Technically this should go to n->infinity*)
 A = A + S1^n/n!;  (*This is apparently the key to doing the matrix exponential*)
 Print[MatrixForm[A]]]

The software I am using is not doing matrix multiplication correctly when I take S1^n. It is just multiplying term by term. So with the first five terms it came out as follows:

[tex]
\left(
\begin{array}{cc}
1 & -\frac{\theta^5}{120}+\frac{\theta^4}{24}-\frac{\theta^3}{6}+\frac{\theta^2}{2}-\theta \\
\frac{\theta^5}{120}+\frac{\theta^4}{24}+\frac{\theta^3}{6}+\frac{\theta^2}{2}+\theta & 1
\end{array}
\right)[/tex]

If I do proper Matrix Multiplication instead of what Mathematica did here, I can see I'd get the sines and cosines.

Question: How is the k=0 term determined in these cases? It wouldn't be the four-by-four identity matrix, apparently.

Answer: Here is a working version of the code for the first four terms in the series. I think the definition of the matrix exponential at http://en.wikipedia.org/wiki/Matrix_exponential is RIGHT, and the sum should go from 0 to infinity. [STRIKE]But this version of the identity matrix is peculiar to the 2 by 2 case.[/STRIKE]
Code:
Clear["Global`*"]
S = {{0, -T}, {T, 0}};
S0 = {{1, 0}, {0, 1}};
S1 = S
S2 = S.S
S3 = S.S.S
S4 = S.S.S.S
MatrixForm[Expand[S0 + S1 + S2/2 + S3/6 + S4/24]]
TeXForm[MatrixForm[Expand[S0 + S1 + S2/2 + S3/6 + S4/24]]]

The output is

[tex]
\left(
\begin{array}{cc}
\frac{T^4}{24}-\frac{T^2}{2}+1 & \frac{T^3}{6}-T \\
T-\frac{T^3}{6} & \frac{T^4}{24}-\frac{T^2}{2}+1
\end{array}
\right)
[/tex]

These are the first terms in the Maclaurin series expansions for sine and cosine.

I've also found that Mathematica has a "MatrixExp" command which gives the final value, without all this fiddling with infinite series.
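For instance, a minimal check with the same 2x2 generator (symbolic T assumed):

Code:
S = {{0, -T}, {T, 0}};
Simplify[ExpToTrig[MatrixExp[S]]] // MatrixForm
(*gives {{Cos[T], -Sin[T]}, {Sin[T], Cos[T]}}, the full rotation matrix in one step*)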
 
  • #10
JDoolin said:
The software I am using is not doing matrix multiplication correctly when I take S1^n. It is just multiplying term by term.

Try MatrixPower[S1, n] instead of S1^n.
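For example, a corrected sketch of the loop from post #9 using MatrixPower:

Code:
S1 = {{0, -Theta}, {Theta, 0}};
A = IdentityMatrix[2];  (*the k = 0 term of the series*)
For[n = 1, n < 6, n = n + 1,
 A = A + MatrixPower[S1, n]/n!]  (*genuine matrix powers instead of element-wise powers*)
MatrixForm[Expand[A]]

which reproduces the truncated Maclaurin series for cosine and sine from post #9's working version.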
 
  • #11
Thanks, Rasalhague.

Just for completion,


[tex]
{K1,K2,K3}=
\left(
\begin{array}{cccc}
0 & \beta_x & 0 & 0 \\
\beta_x & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}
\right)
\left(
\begin{array}{cccc}
0 & 0 & \beta_y & 0 \\
0 & 0 & 0 & 0 \\
\beta_y & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}
\right)
\left(
\begin{array}{cccc}
0 & 0 & 0 & \beta_z \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\beta_z & 0 & 0 & 0
\end{array}
\right)
[/tex]

[tex]

{S1,S2,S3}=
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & -\theta_1 \\
0 & 0 & \theta_1 & 0
\end{array}
\right)
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & \theta_2 \\
0 & 0 & 0 & 0 \\
0 & -\theta_2 & 0 & 0
\end{array}
\right)
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & -\theta_3 & 0 \\
0 & \theta_3 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}
\right)
[/tex]

[tex]
MatrixExp[S1]=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \cos (\theta_1) & -\sin (\theta_1) \\
0 & 0 & \sin (\theta_1) & \cos (\theta_1)
\end{array}
\right)
[/tex]
[tex]
MatrixExp[K1]=\left(
\begin{array}{cccc}
\frac{1}{2} e^{-\beta_x} \left(e^{2 \beta_x}+1\right) & \frac{1}{2} e^{-\beta_x} \left(e^{2 \beta_x}-1\right) & 0 & 0 \\
\frac{1}{2} e^{-\beta_x} \left(e^{2 \beta_x}-1\right) & \frac{1}{2} e^{-\beta_x} \left(e^{2 \beta_x}+1\right) & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right)
[/tex]

The terms in the last matrix are hyperbolic sines and cosines.
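Explicitly, those combinations reduce to the hyperbolic functions:

[tex]\tfrac{1}{2} e^{-\beta_x}\left(e^{2 \beta_x}+1\right) = \cosh\beta_x, \qquad \tfrac{1}{2} e^{-\beta_x}\left(e^{2 \beta_x}-1\right) = \sinh\beta_x[/tex]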

One concern I have about the rotation matrices: the method prescribed seems to be commutative, whereas rotations around different axes are usually considered non-commutative.

(Ah, I see. If you do all your rotations from the body in the original direction, it comes out commutative. If you turn with the body, the rotations are not commutative.)
 
  • #12
Sorry to post in this thread after such a long time, but I found an error in my thinking. I cannot now see how rotations (yaw, pitch, roll) can be commutative, whether you turn along with the object or not: a change in yaw followed by a roll is different from a roll followed by a change in yaw.

Perhaps the "method prescribed" is not commutative, after all. Quite a relief, actually, since that means the rotations have to be treated individually. But I have to ask, if these aren't commutative, is it even philosophically possible for an object to turn on three axes simultaneously? It seems like it should be, since you could power three motors. But the position of those three motors would each be changing over time.

I would say, sure, it is both physically and philosophically possible to have an object rotating on three axes, but there is no unambiguous single way for that to happen. You would have to choose how to put one axis inside the other.

But on second thought, even then, if the engines were turning inside one another, they would line up on one occasion or another. So perhaps, it may actually be both physically and philosophically impossible for an object to rotate around three axes at once.


Also, the equation for K1, K2, K3 originally got messed up in post #11, so here it is again:

[tex]{K1,K2,K3}= \left( \begin{array}{cccc} 0 & \beta_x & 0 & 0 \\ \beta_x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) \left( \begin{array}{cccc} 0 & 0 & \beta_y & 0 \\ 0 & 0 & 0 & 0 \\ \beta_y & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) \left( \begin{array}{cccc} 0 & 0 & 0 & \beta_z \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \beta_z & 0 & 0 & 0 \end{array} \right)[/tex]
 
  • #13
Ahh, I think I have it. A combination of two rotations, one after another, creates a final effect as though the object were rotated around a different axis. The "method prescribed" doesn't imply commutativity.

Two simultaneous rotations, one inside the other, are not the same as two simultaneous rotations, the other inside the one.
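A quick way to check this non-commutativity in Mathematica (a sketch, reusing the S1 and S3 generators from post #11 with θ1 = θ3 = π/2):

Code:
S1 = {{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, -Pi/2}, {0, 0, Pi/2, 0}};
S3 = {{0, 0, 0, 0}, {0, 0, -Pi/2, 0}, {0, Pi/2, 0, 0}, {0, 0, 0, 0}};
MatrixExp[S1].MatrixExp[S3] == MatrixExp[S3].MatrixExp[S1]
(*False: the two quarter-turns applied in opposite orders give different matrices*)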
 

FAQ: General Lorentz Matrix in Terms of Rapidities

1. What is a Lorentz Matrix in terms of rapidities?

A Lorentz Matrix is a mathematical representation of the transformation between two frames of reference in special relativity, where the relative velocity between the frames is constant. The rapidity, denoted by the Greek letter eta (η) and defined by η = artanh(v/c), is an alternative measure of the relative velocity between the frames and is used in the Lorentz Matrix to express the transformation equations.

2. How is the Lorentz Matrix expressed in terms of rapidities?

The Lorentz Matrix is a 4x4 matrix. For a boost along a single axis with rapidity η, the 2x2 block mixing the time coordinate with that axis has cosh(η) on its diagonal and -sinh(η) off its diagonal, where cosh and sinh are the hyperbolic functions; the other two directions are unchanged. A boost in an arbitrary direction is specified by a rapidity η and a unit vector n (equivalently by a rapidity vector with components ηx, ηy, ηz), as in the matrix quoted in post #3, and a general (restricted) Lorentz transformation combines such a boost with a spatial rotation.
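For example, a boost along the x-axis with rapidity η takes the form

[tex]\Lambda = \left( \begin{array}{cccc} \cosh\eta & -\sinh\eta & 0 & 0 \\ -\sinh\eta & \cosh\eta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)[/tex]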

3. What is the significance of rapidity in the Lorentz transformation?

Rapidities play a crucial role in the Lorentz transformation, as they are used to calculate the transformation equations for time, length, and momentum between two frames of reference in special relativity. They are also important because, for collinear boosts, they are additive: if frame B has rapidity η₁ relative to frame A, and an object has rapidity η₂ relative to frame B along the same direction, then the object's rapidity relative to frame A is η₁ + η₂.
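This additivity is equivalent to the relativistic velocity-addition formula, since

[tex]\tanh(\eta_1+\eta_2) = \frac{\tanh\eta_1 + \tanh\eta_2}{1 + \tanh\eta_1 \tanh\eta_2} = \frac{v_1/c + v_2/c}{1 + v_1 v_2/c^2}[/tex]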

4. How do rapidities relate to relativistic velocities?

Rapidities and relativistic velocities are directly related through the formula v = c*tanh(η), where v is the velocity, c is the speed of light, and tanh is the hyperbolic tangent function. This formula allows for the conversion between rapidities and relativistic velocities, which are both measures of the relative velocity between two frames of reference in special relativity.
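As a quick worked example, v = 0.6c corresponds to η = artanh(0.6) ≈ 0.693, and composing two such collinear boosts gives

[tex]v_{\text{total}} = c\tanh\left(2\,\mathrm{artanh}\,0.6\right) = c\,\frac{0.6+0.6}{1+0.6\times 0.6} = \frac{15}{17}\,c \approx 0.88\,c[/tex]

rather than 1.2c.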

5. Can rapidities be negative?

Yes, rapidities can be negative. A negative rapidity indicates relative motion in the direction opposite to that of a positive rapidity of the same magnitude. In other words, the sign of the rapidity encodes the direction of the relative motion along the boost axis, while its magnitude encodes the speed.
