Square of orthogonal matrix vanishes

In summary, we discussed the expression for kinetic energy in terms of a rotation matrix ##R## and its invariance under rotations, derived the relevant equation for the velocity in terms of ##R## and the angular velocity, and showed that it is equivalent to the product-rule derivative of the transformed coordinate.
  • #1
PhysicsRock
Homework Statement
We consider a coordinate transform where ##\vec{x}^\prime(t) = R(t) (\vec{a} + \vec{x}(t))## with a constant ##\vec{a}##.
Write the lagrangian in terms of ##\vec{x}## and ##\dot{\vec{x}}##.
Relevant Equations
Velocity in terms of ##\dot{\vec{x}}## and ##\vec{x}##: ##\dot{\vec{x}}^\prime = R \left[ \dot{\vec{x}} + \vec{\omega} \times (\vec{a} + \vec{x}) \right]##.
Lagrangian function ##L = T - V##.
I found the answer in a script from a couple of years ago. It says the kinetic energy is

$$
T = \frac{1}{2} m (\dot{\vec{x}}^\prime)^2 = \frac{1}{2} m \left[ \dot{\vec{x}} + \vec{\omega} \times (\vec{a} + \vec{x}) \right]^2
$$

However, it doesn't show the rotation matrix ##R##. This would imply that ##R^2 = R \cdot R = I##. ##R## is an orthogonal matrix, but I'm pretty sure that the square of such is not always equal to the identity.

So then how come the matrix doesn't show up in the expression for the kinetic energy?
 
  • #2
PhysicsRock said:
So then how come the matrix doesn't show up in the expression for the kinetic energy?
The dot product ##\vec{z}\cdot\vec{z}## is shorthand for the matrix multiplication ##z^{T}z##, where ##z## is a column vector. So under a rotation by ##R##, $$\vec{z\,}'\equiv\overleftrightarrow{R}\cdot\vec{z}=R\,z\Rightarrow\vec{z}\,'\cdot\vec{z}\,'=\left(z'\right)^{T}z'=z^{T}R^{T}R\,z=z^{T}z=\vec{z}\cdot\vec{z}$$since ##R## is an orthogonal matrix. (This expresses the invariance of the dot-product under rotations.)
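The invariance ##z^T R^T R\,z = z^T z## is easy to check numerically. A minimal sketch (an illustrative example, not from the thread, assuming numpy is available):

```python
import numpy as np

# A sample rotation by angle theta about the z-axis (an orthogonal matrix).
theta = 0.7
R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

z = np.array([1.0, 2.0, 3.0])
z_prime = R @ z

# R^T R = I, so the dot product (and hence the norm) is invariant.
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(z_prime @ z_prime, z @ z)
```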
 
  • #3
PhysicsRock said:
Homework Statement: We consider a coordinate transform where ##\vec{x}^\prime(t) = R(t) (\vec{a} + \vec{x}(t))## with a constant ##\vec{a}##.
Write the lagrangian in terms of ##\vec{x}## and ##\dot{\vec{x}}##.
Relevant Equations: Velocity in terms of ##\dot{\vec{x}}## and ##\vec{x}##: ##\dot{\vec{x}}^\prime = R \left[ \dot{\vec{x}} + \vec{\omega} \times (\vec{a} + \vec{x}) \right]##.
Lagrangian function ##L = T - V##.
Differentiating ##\vec{x}^\prime(t) = R(t) (\vec{a} + \vec{x}(t))## with respect to time, I get
[tex]\dot{\vec{x}}^\prime(t) = \dot{R}(t) (\vec{a} + \vec{x}(t)) + R(t) \dot{\vec{x}}(t).[/tex]
Is this the same as your relevant equation?
 
  • #4
renormalize said:
The dot product ##\vec{z}\cdot\vec{z}## is shorthand for the matrix multiplication ##z^{T}z##, where ##z## is a column vector. So under a rotation by ##R##, $$\vec{z\,}'\equiv\overleftrightarrow{R}\cdot\vec{z}=R\,z\Rightarrow\vec{z}\,'\cdot\vec{z}\,'=\left(z'\right)^{T}z'=z^{T}R^{T}R\,z=z^{T}z=\vec{z}\cdot\vec{z}$$since ##R## is an orthogonal matrix. (This expresses the invariance of the dot-product under rotations.)
That makes sense. Thank you.
 
  • #5
anuttarasammyak said:
Differentiating ##\vec{x}^\prime(t) = R(t) (\vec{a} + \vec{x}(t))## with respect to time, I get
[tex]\dot{\vec{x}}^\prime(t) = \dot{R}(t) (\vec{a} + \vec{x}(t)) + R(t) \dot{\vec{x}}(t).[/tex]
Is this the same as your relevant equation?
Yes. Allow me to demonstrate. We start with your expression and factor out an ##R##. Since it is orthogonal that leads us to

$$
\dot{\vec{x}}^\prime = R \left[ \dot{\vec{x}} + R^T \dot{R} ( \vec{a} + \vec{x} ) \right]
$$

Last semester, we derived that ##(R^T \dot{R})_{ij} = - \epsilon_{ijk} \omega_k##, where ##\omega_k## are the components of angular velocity. Now we plug that in and get

$$
\dot{x}^\prime_{i} = R_{ij} ( \dot{x}_j + (R^T \dot{R})_{jk} (a_k + x_k) )
= R_{ij} ( \dot{x}_j + (-\epsilon_{jkl} \omega_l) (a_k + x_k) )
$$

Recall the definition of the vector product ##(\vec{a} \times \vec{b})_i = \epsilon_{ijk} a_j b_k##. With that we obtain

$$
\dot{\vec{x}}^\prime = R ( \dot{\vec{x}} - (\vec{a} + \vec{x}) \times \vec{\omega} )
$$

Since the vector product is antisymmetric, we can alternatively write

$$
\dot{\vec{x}}^\prime = R ( \dot{\vec{x}} + \vec{\omega} \times (\vec{a} + \vec{x}) )
$$

And we're done.
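The equivalence of the product-rule form and the factored form can also be verified numerically. A hedged sketch (hypothetical values, assuming numpy; the rotation is about the z-axis with ##\theta(t) = \omega t##, so ##\vec{\omega} = (0, 0, \omega)##):

```python
import numpy as np

# Rotation about the z-axis with angle theta(t) = w * t.
w = 1.3
t = 0.4
theta = w * t

c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
# Time derivative dR/dt = w * dR/dtheta for this one-parameter rotation.
Rdot = w * np.array([[-s, -c, 0.0], [c, -s, 0.0], [0.0, 0.0, 0.0]])

omega = np.array([0.0, 0.0, w])       # angular velocity vector
a = np.array([0.5, -1.0, 2.0])        # constant offset
x = np.array([1.0, 0.3, -0.7])        # position
xdot = np.array([0.2, 0.1, 0.9])      # velocity

# Product-rule form vs. the factored form with omega x (a + x).
lhs = Rdot @ (a + x) + R @ xdot
rhs = R @ (xdot + np.cross(omega, a + x))
assert np.allclose(lhs, rhs)
```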
 

FAQ: Square of orthogonal matrix vanishes

What is an orthogonal matrix?

An orthogonal matrix is a square matrix whose rows and columns are orthonormal vectors, meaning that the matrix multiplied by its transpose results in the identity matrix. Mathematically, a matrix \( Q \) is orthogonal if \( Q^T Q = QQ^T = I \), where \( Q^T \) is the transpose of \( Q \) and \( I \) is the identity matrix.

What does it mean for the square of a matrix to vanish?

The square of a matrix vanishes if, when the matrix is multiplied by itself, the result is the zero matrix. Mathematically, for a matrix \( A \), the square of the matrix vanishes if \( A^2 = A \cdot A = 0 \), where \( 0 \) is the zero matrix.
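For contrast, a nonzero matrix whose square vanishes does exist, it just cannot be orthogonal. A minimal (hypothetical) nilpotent example, assuming numpy:

```python
import numpy as np

# A nonzero, non-orthogonal matrix whose square is the zero matrix.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

assert np.allclose(A @ A, np.zeros((2, 2)))
```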

Can an orthogonal matrix have its square vanish?

In general, an orthogonal matrix cannot have its square vanish. If \( Q \) is orthogonal, then \( Q^T Q = I \), so \( Q \) is invertible. If \( Q^2 = 0 \), multiplying on the left by \( Q^T \) would give \( Q^T Q^2 = (Q^T Q) Q = I Q = Q = 0 \), a contradiction, since the zero matrix cannot satisfy \( Q^T Q = I \).
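This can be illustrated numerically: any orthogonal matrix has \( |\det Q| = 1 \), so it is invertible and its square cannot be zero. A sketch with a hypothetical 2D rotation, assuming numpy:

```python
import numpy as np

theta = 1.1
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality forces |det Q| = 1, hence invertibility.
assert np.isclose(abs(np.linalg.det(Q)), 1.0)
# Consequently Q @ Q is never the zero matrix.
assert not np.allclose(Q @ Q, np.zeros((2, 2)))
```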

Are there any special cases where the square of an orthogonal matrix vanishes?

No, there are no special cases where the square of an orthogonal matrix vanishes. The property of orthogonality ensures that the matrix has full rank and its eigenvalues are of unit modulus, which precludes the possibility of the matrix squared resulting in a zero matrix.

What are the implications of an orthogonal matrix in linear algebra?

Orthogonal matrices have several important properties and implications in linear algebra. They preserve the Euclidean norm, meaning they are isometries. They also have eigenvalues of absolute value 1, so repeated multiplication by an orthogonal matrix neither amplifies nor shrinks vectors, which makes them numerically well-behaved. Orthogonal matrices appear in many applications, including QR decomposition, solving linear systems, and rotations and reflections in computer graphics.
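The QR decomposition mentioned above is a convenient way to produce an orthogonal factor in practice. A brief sketch (hypothetical data, assuming numpy's `np.linalg.qr`):

```python
import numpy as np

# Factor a square matrix A into an orthogonal Q and an upper-triangular R.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Q, R_tri = np.linalg.qr(A)

assert np.allclose(Q.T @ Q, np.eye(3))   # Q is orthogonal
assert np.allclose(Q @ R_tri, A)         # the factorization reproduces A

# Orthogonal maps preserve the Euclidean norm (they are isometries).
v = np.array([1.0, 2.0, 2.0])
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))
```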
