Orthogonal Matrices and Rotations

In summary, Fredrik says that a rotation is an orthogonal transformation with determinant 1, while Artin defines a rotation as an orientation-preserving isometry that fixes a point. The thread also discusses whether rotations in higher dimensions have "an axis of rotation", and how such an axis would be defined in a higher-dimensional setting.
  • #1
SprucerMoose
Hi all,

I have been trying to gain a deeper insight into quadratic forms and have realized that my textbook makes the assumption that an orthogonal matrix corresponds to a rotation, a reflection, or a combination of the two when viewed as a linear transformation. The textbook outlines a proof that demonstrates all norms and dot products (and as a result angles) are invariant under such a transformation. Is this enough information to draw the conclusion that the transformation given by an orthogonal matrix is indeed a reflection/rotation, or is there a more rigorous proof of this conjecture?

I have been looking around the interweb but haven't found anything as of yet.
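For what it's worth, the invariance claim is easy to check numerically. A minimal sketch using numpy (the matrix Q below is just an example rotation; any orthogonal matrix would do):

[code]
import numpy as np

# An example orthogonal matrix: rotation by 30 degrees in the plane.
t = np.pi / 6
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

x = np.array([3.0, -1.0])
y = np.array([0.5,  2.0])

# Norms and dot products (and hence angles) are unchanged.
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
assert np.isclose(np.dot(Q @ x, Q @ y), np.dot(x, y))
[/code]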
 
  • #2
Fredrik
I think it's perfectly acceptable to just take this as the definition of the term "rotation":
A linear operator on a finite-dimensional vector space over [itex]\mathbb R[/itex] is said to be a rotation if it's orthogonal.
Like with so many other definitions, this is just a way to make an idea mathematically precise, by associating the term we use for the idea with a set with the appropriate properties. (Yes, functions are sets too, if we choose to think of ZFC set theory as the foundation of mathematics).

An alternative would be to define "rotation" in a different way, and then prove that the definition is equivalent to the one above. I have never done that, but I think it would make sense to define a "rotation" as a bijection of [itex]\mathbb R^n[/itex] onto itself that preserves...blah-blah-blah. I haven't really thought this through, so I can't tell you what choice of "blah-blah-blah" is appropriate. I think that the function must at least take straight lines to straight lines, and parallel straight lines to parallel straight lines, but perhaps that's not enough. We may e.g. have to specify that it takes (n-1)-dimensional hyperplanes to (n-1)-dimensional hyperplanes.
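One way to see that "takes straight lines to straight lines" is not enough on its own: any invertible linear map does that, including shears, which are clearly not rotations. A small sketch (numpy; the shear S is an arbitrary example):

[code]
import numpy as np

# A shear: linear, so it maps straight lines to straight lines
# and parallel lines to parallel lines...
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# ...but it is no rotation: it stretches e2 and destroys the right
# angle between e1 and e2.
print(np.linalg.norm(S @ e2))   # sqrt(2), not 1
print(np.dot(S @ e1, S @ e2))   # 1.0, not 0
[/code]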
 
  • #3
micromass
I agree 100% with Fredrik's post, but there is one small inaccuracy:

Fredrik said:
A linear operator on a finite-dimensional vector space over [itex]\mathbb R[/itex] is said to be a rotation if it's orthogonal with determinant 1.

I just wanted to be pedantic :biggrin:

As for the issue we're talking about, there might be a more intuitive way to define rotations other than just being an element of SO(n).

Reflection with respect to a hyperplane is something that is very easily described. It is basically a transformation f such that [itex]f^2=1[/itex] and such that the fixed subspace satisfies [itex]\dim(\ker(f-1))=n-1[/itex].

Now we can call a linear transformation g a rotation if there exist reflections f and f' such that [itex]g=f\circ f^\prime[/itex].

Then (I think) it is true that O(n) is the group generated by all the reflections. One caveat: a composition of two reflections fixes an (n-2)-dimensional subspace, so for [itex]n\geq 4[/itex] this definition only captures the "simple" rotations; a general element of SO(n) is a composition of an even number of reflections. But checking this is likely a bit tedious...
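A quick numerical check of this construction (a sketch; the two normal vectors are arbitrary). Each factor below is a Householder reflection [itex]I - 2vv^T[/itex] (v a unit vector), which reflects across the hyperplane orthogonal to v:

[code]
import numpy as np

def householder(v):
    """Reflection across the hyperplane orthogonal to v:
    an involution whose fixed subspace has dimension n-1."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

# Two arbitrary reflections in R^3.
f  = householder(np.array([1.0, 2.0, -1.0]))
fp = householder(np.array([0.0, 1.0,  1.0]))

g = f @ fp   # composition of the two reflections

assert np.allclose(f @ f, np.eye(3))       # f^2 = 1
assert np.allclose(g.T @ g, np.eye(3))     # g is orthogonal
assert np.isclose(np.linalg.det(g), 1.0)   # det = +1: a rotation
[/code]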
 
  • #4
Fredrik
micromass said:
I just wanted to be pedantic :biggrin:
That's a good thing. :smile: Regarding the determinant, rotations are often defined as O(n) transformations, and then members of SO(n) are called proper rotations. What I said is consistent with that terminology, but I should at least have said something about it.
 
  • #5
Thanks a lot guys
 
  • #6
Stephen Tashi
One interesting question is whether rotations in higher dimensional spaces have "an axis of rotation" - and also how that would be defined in a higher dimensional situation. So perhaps defining "rotation" to mean a transformation that preserves distances and angles doesn't quite capture (in the definition itself) everything we expect in a 3D or 2D rotation.
 
  • #7
In Artin's Algebra, he gives a more intuitive definition of a rotation, for [itex] \mathbb{R}^2[/itex] at least. A rotation is an orientation-preserving isometry that fixes a point. (An isometry is just a distance-preserving function). I'm not sure how one would generalize this to higher dimensions, though...
 
  • #8
lpetrich
A rotation axis would be a single direction that a rotation matrix keeps fixed. Let us now see if there are any such directions.

First, find the possible eigenvalues of rotation and reflection matrices. A direction that stays fixed is an eigenvector with eigenvalue 1, while a direction that gets inverted is an eigenvector with eigenvalue -1.

Consider an eigenvalue [itex]\lambda[/itex] and eigenvector [itex]x[/itex] of a real orthogonal matrix [itex]T[/itex]: [itex]Tx = \lambda x[/itex].

Consider [tex]\bar x^T T x = \lambda\,(\bar x^T x).[/tex]

For the product of the first two factors, [itex]\bar x^T T = (T^T\bar x)^T[/itex], and [tex]T^T\bar x = T^{-1}\bar x = \overline{T^{-1}x} = \overline{(1/\lambda)\,x} = (1/\bar\lambda)\,\bar x,[/tex] so that [itex]\bar x^T T x = (1/\bar\lambda)(\bar x^T x)[/itex],

yielding [itex]\lambda = 1/\bar\lambda[/itex], i.e. [itex]\lambda\bar\lambda = 1[/itex]. This means that [itex]|\lambda| = 1[/itex] -- every eigenvalue has an absolute value of 1. If [itex]\lambda[/itex] is a non-real eigenvalue, then [itex]\bar\lambda = 1/\lambda[/itex] is also one, and distinct from [itex]\lambda[/itex].

Thus, the non-real eigenvalues come in conjugate pairs. The only two real ones possible are +1 and -1. A pure rotation matrix's eigenvalues include an even number of -1's, and a reflection one's an odd number of -1's.

For +1, one has to distinguish between even and odd dimension. For even dimension, pure rotations have an even number of +1's and reflections an odd number of them, while for odd dimension, pure rotations have an odd number of +1's and reflections an even number of them. (This follows from counting: the determinant is the product of the eigenvalues, each conjugate pair contributes [itex]\lambda\bar\lambda = 1[/itex], so the sign is set by the number of -1's, and the number of +1's is then forced by the dimension.)

-

2D:
Rotations: eigenvalues w, w*, |w| = 1 (no directions preserved)
Reflections: eigenvalues +1, -1 (1 preserved, 1 flipped)

3D:
Rotations: eigenvalues w, w*, +1 (1 preserved)
Reflections: eigenvalues w, w*, -1 (1 flipped)
The preserved or flipped direction is a well-defined rotation axis.

For 4D rotations, one can have no directions preserved, as with the 2D case, and contrary to the 3D case. If one direction is preserved, then a second one must also be. Thus, 4D rotations have no well-defined rotation axis. This proof is easily extended to higher dimensions.
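These eigenvalue facts are easy to confirm numerically (a sketch; the random rotation comes from a QR decomposition, with one column flipped if necessary so that det = +1):

[code]
import numpy as np

rng = np.random.default_rng(0)

# A random 4D rotation via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]   # force det = +1

eigvals = np.linalg.eigvals(Q)
print(eigvals)   # two conjugate pairs; generically neither is +1,
                 # so generically no direction is preserved
assert np.allclose(np.abs(eigvals), 1.0)   # every |lambda| = 1
[/code]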
 
  • #9
Stephen Tashi said:
One interesting question is whether rotations in higher dimensional spaces have "an axis of rotation" - and also how that would be defined in a higher dimensional situation. So perhaps defining "rotation" to mean a transformation that preserves distances and angles doesn't quite capture (in the definition itself) everything we expect in a 3D or 2D rotation.
An axis of rotation is a 3D rotation concept. There is no axis of rotation in 2D. There is a point of rotation in 2D. In 4D, the primitive rotations are rotations about the XY, XZ, XW, YZ, YW, and ZW planes (not axes). What happens when you compose rotations in 4D?

In 3D, when you compose rotations, a fixed axis of rotation still exists per Euler's rotation theorem. The 4D analog would be that a composition of 4D rotations will always yield a fixed plane of rotation. Sometimes that happens, sometimes it doesn't. Euler's rotation theorem pertains to 3D only. In 4D there are Clifford rotations (to which lpetrich alluded) that don't preserve anything except the central point.

Things get even weirder in higher dimensions.
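A double rotation of this kind is easy to exhibit (a sketch; the two angles are arbitrary, and the rotation is isoclinic, i.e. a Clifford rotation, when they are equal):

[code]
import numpy as np

def plane_rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

# Rotate the XY plane and the ZW plane simultaneously.
a, b = 0.7, 1.3   # two arbitrary nonzero angles
R = np.zeros((4, 4))
R[:2, :2] = plane_rot(a)
R[2:, 2:] = plane_rot(b)

eigvals = np.linalg.eigvals(R)
# Eigenvalues exp(+-ia), exp(+-ib): no eigenvalue 1, so no direction
# is left fixed -- only the central point is preserved.
assert not np.any(np.isclose(eigvals, 1.0))
[/code]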
 
  • #10
lpetrich said:
For 4D rotations, one can have no directions preserved, as with the 2D case, and contrary to the 3D case. If one direction is preserved, then a second one must also be. Thus, 4D rotations have no well-defined rotation axis. This proof is easily extended to higher dimensions.

Sadly this is not true. Some rotations do have a rotation axis in 5D; consider the rotation:

[tex]\left(\begin{array}{ccccc}
\cos\theta & -\sin \theta & 0 & 0 & 0\\
\sin \theta & \cos \theta & 0 & 0 & 0\\
0 & 0 & \cos \phi & -\sin\phi & 0\\
0 & 0 & \sin \phi & \cos \phi & 0\\
0 & 0 & 0 & 0 & 1\\
\end{array}\right)[/tex]

This has an eigenvector (0,0,0,0,1). So there is an axis of rotation in this case! Not all 5D rotations have an axis of rotation however...
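This example can be checked directly (a sketch reusing the same block structure; the angles are arbitrary):

[code]
import numpy as np

def plane_rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

theta, phi = 0.7, 1.3
R = np.eye(5)
R[:2, :2] = plane_rot(theta)
R[2:4, 2:4] = plane_rot(phi)

e5 = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
assert np.allclose(R @ e5, e5)   # (0,0,0,0,1) is fixed: an axis of rotation
[/code]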
 
  • #11
That's what happens in odd dimensions -- there's at least one preserved direction for rotations and at least one flipped direction for reflections.

In all dimensions, a pair of conjugate eigenvalues is associated with a rotation 2-plane. But only in 3 dimensions is there just one rotation plane with a well-defined normal direction, and that normal vector is the rotation axis.
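In 3 dimensions this gives a practical recipe for finding the axis: it is the eigenvector for the eigenvalue +1. A sketch (the two factor rotations are arbitrary examples):

[code]
import numpy as np

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# A composite 3D rotation (Euler's theorem: it still has a fixed axis).
R = Rx(np.deg2rad(25)) @ Rz(np.deg2rad(40))

w, V = np.linalg.eig(R)
axis = np.real(V[:, np.argmin(np.abs(w - 1.0))])   # eigenvector for +1
assert np.allclose(R @ axis, axis)
print(axis / np.linalg.norm(axis))
[/code]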
 
  • #12
Fredrik
I started thinking about intuitive definitions of rotations today, and came up with the following: Suppose that f is a permutation of [itex]\mathbb R^3[/itex] that preserves norms of vectors, and angles between pairs of vectors. (This idea was mentioned earlier in the thread. I just didn't realize how easy it is to prove that this implies linearity). Since the angle θ between two arbitrary vectors x and y is given by [tex]\cos\theta=\frac{\langle x,y\rangle}{\|x\|\|y\|},[/tex]
the expression on the right must also be preserved, i.e. we must have [tex]\frac{\langle f(x),f(y)\rangle}{\|f(x)\|\|f(y)\|} =\frac{\langle x,y\rangle}{\|x\|\|y\|},[/tex] which implies [tex]\frac{\langle f(x),f(y)\rangle}{\langle x,y\rangle} =\frac{\|f(x)\|\|f(y)\|}{\|x\|\|y\|}=1.[/tex]
So the inner product is preserved too. [tex]
\begin{align}
\langle f(x),f(ay+bz)\rangle &=\langle x,ay+bz\rangle=a\langle x,y\rangle+b\langle x,z\rangle =a\langle f(x),f(y)\rangle+b\langle f(x),f(z)\rangle\\
&=\langle f(x),af(y)+bf(z)\rangle.
\end{align}
[/tex] Since this holds for all x,y,z and all a,b, f must be linear*. And a linear bijection that preserves norms is by definition an orthogonal transformation.

*) Choose x such that [itex]f(x)=f(ay+bz)-af(y)-bf(z)[/itex] and use [itex]\|u\|=0\Rightarrow u=0[/itex].

Edit: I can see that there's something wrong with what I just said, but I don't immediately see where my mistake is. I don't have time to think about it now. I'll return to it later. (The problem is that a permutation of [itex]\mathbb R^2[/itex] that rotates every vector by an angle that depends on the norm isn't linear. Maybe my idea works for [itex]\mathbb R^n[/itex] with n≥3, but I didn't use that inequality in the proof).
 
  • #13
Fredrik said:
The problem is that a permutation of [itex]\mathbb R^2[/itex] that rotates every vector by an angle that depends on the norm isn't linear.

But such a map doesn't preserve angles, does it?
 
  • #14
Ah, of course. I was thinking that it does, but I was only looking at angles between vectors of the same norm. I guess there's no problem then.
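The counterexample is worth checking numerically: a norm-dependent rotation of [itex]\mathbb R^2[/itex] preserves all norms, but not the angles between vectors of different norms (a sketch):

[code]
import numpy as np

def f(x):
    """Rotate x by an angle equal to its own norm: a nonlinear
    permutation of R^2 that preserves every norm."""
    t = np.linalg.norm(x)
    c, s = np.cos(t), np.sin(t)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

x = np.array([1.0, 0.0])   # norm 1
y = np.array([0.0, 2.0])   # norm 2

# Norms are preserved...
assert np.isclose(np.linalg.norm(f(x)), 1.0)
assert np.isclose(np.linalg.norm(f(y)), 2.0)

# ...but x and y are orthogonal while f(x) and f(y) are not,
# because they were rotated by different angles (1 vs 2 radians).
print(np.dot(f(x), f(y)))   # about -1.68, not 0
[/code]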
 
  • #15
Fredrik's proof looks broadly correct. I'll take a stab at that problem. Consider a bilinear form [itex]\langle x,y\rangle[/itex] of x and y that satisfies

[itex]\langle x,y\rangle = 0[/itex] for all x [itex]\Rightarrow[/itex] y = 0
[itex]\langle x,y\rangle = 0[/itex] for all y [itex]\Rightarrow[/itex] x = 0

For a finite-dimensional vector space,

[tex]\langle x,y\rangle = x^T g\, y \quad\text{or else}\quad \langle x,y\rangle = x^T g\, \bar y[/tex]

(the second form conjugated), where g is some constant matrix. Now break x down by basis vectors: (1,0,0,...), then (0,1,0,...), and so on. Then

[itex]g\,y = 0[/itex] implies y = 0 if g is invertible
[itex]x^T g = 0[/itex] implies x = 0 if g is invertible

Now consider a function f that preserves that bilinear form:

[tex]\langle f(x),f(y)\rangle = \langle x,y\rangle[/tex]

Preserving the form is equivalent to preserving norms and angles. From Fredrik's proof:

[tex]\langle f(x),\,f(ay+bz) - a f(y) - b f(z)\rangle = 0[/tex]

If f(x) ranges over the whole space as x does, and g is invertible, then we find that f is linear.

If these conditions are violated, then f is not constrained to be linear -- this should be easy to verify.

But if they are satisfied, we find [itex]f(x) = Rx[/itex] for some matrix R. Writing [itex]R^T[/itex] for its transpose, and breaking x and y down by basis, we find

[tex]R^T g R = g[/tex]
or else
[tex]R^T g \bar R = g[/tex]

For g to be invertible, [itex]\det(g) \neq 0[/itex].
The first case is the orthogonal case: [itex]\det(R) = +1[/itex] (pure rotation) or [itex]\det(R) = -1[/itex] (rotation + reflection).
The second case is the unitary case: [itex]|\det(R)| = 1[/itex].
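Both cases are easy to verify numerically (a sketch; the matrices are arbitrary examples):

[code]
import numpy as np

# Orthogonal case: g is the Euclidean inner product and R^T g R = g.
g = np.eye(2)
t = 0.9
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
assert np.allclose(R.T @ g @ R, g)
assert np.isclose(np.linalg.det(R), 1.0)        # a pure rotation

# Unitary case (the conjugated form): U^dagger U = I and |det(U)| = 1.
U = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.isclose(abs(np.linalg.det(U)), 1.0)
[/code]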
 

FAQ: Orthogonal Matrices and Rotations

What is an orthogonal matrix?

An orthogonal matrix is a square matrix with real entries that satisfies the condition [itex]A^TA = I[/itex], where [itex]A^T[/itex] is the transpose of A and I is the identity matrix. This means that the columns of an orthogonal matrix are mutually orthogonal (perpendicular) and have length 1, so the matrix represents a rotation, a reflection, or a composition of the two.

How are orthogonal matrices used in rotations?

Orthogonal matrices are used to represent rotations in multi-dimensional spaces. Since an orthogonal matrix preserves norms and inner products, it rotates (and possibly reflects) points about the origin without changing their distances from the origin or the angles between them. This makes such matrices useful for geometric transformations in computer graphics and physics simulations.

What is the determinant of an orthogonal matrix?

The determinant of an orthogonal matrix is always either 1 or -1. This is because the determinant of a matrix is the signed volume-scaling factor of the transformation it performs, and an orthogonal matrix preserves volumes: its determinant is 1 when orientation is preserved (a rotation) and -1 when orientation is reversed (a reflection).

How are orthogonal matrices related to the concept of orthonormal bases?

An orthonormal basis is a set of vectors that are mutually orthogonal and normalized (have length 1). The columns (and likewise the rows) of an orthogonal matrix form an orthonormal basis of [itex]\mathbb R^n[/itex], and conversely, any square matrix whose columns form an orthonormal basis is orthogonal. This relationship is important in linear algebra and is used in many applications, such as signal processing and data compression.

Can any matrix be orthogonal?

No, not all matrices are orthogonal. In order for a matrix to be orthogonal, it must satisfy the condition [itex]A^TA = I[/itex], which means it must be a square matrix with mutually perpendicular, normalized columns. However, any square matrix with linearly independent columns can be turned into an orthogonal matrix by applying the Gram-Schmidt process, which is a method for creating an orthonormal basis from a set of linearly independent vectors.
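A minimal Gram-Schmidt sketch (the classical variant, assuming the input columns are linearly independent):

[code]
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent).
    The result Q satisfies Q^T Q = I."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            # Subtract the projection onto each earlier direction.
            v = v - np.dot(Q[:, i], A[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # invertible, but not orthogonal
Q = gram_schmidt(A)
assert np.allclose(Q.T @ Q, np.eye(2))
[/code]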
