Explore Geometry of Symmetric Spaces & Lie Groups on PF

  • #1
garrett
A few friends have expressed an interest in exploring the geometry of symmetric spaces and Lie groups as they appear in several approaches to describing our universe. Rather than do this over email, I've decided to bring the discussion to PF, where we may draw from the combined wisdom of its denizens.

Whenever possible, I will be adding and referring to material on my related personal research wiki:

http://deferentialgeometry.org

The meta-idea is to have a linear discussion and development (kind of a mini-course) here on PF, while building up the wiki as a reference and personal research tool for this and other physics inquiries. This will provide background material and (hopefully) enforce a consistent mathematical notation for the discussion. I'm hoping this dual resource use will provide the best of both mediums.

The subjects I/we would like to cover in this thread include:

• Lie algebra generators, T_A (using su(2) as the simplest nontrivial example; matrix representation, Clifford bivectors, or other Clifford elements)
• structure coefficients (maybe Lie algebra roots, weights, and classification)
• exponentiation, g = exp(x^A T_A), giving Lie group elements (SU(2) example)
• coordinate change, double covers (SU(2) vs SO(3))
• symmetry: Killing vector fields related to generator action
• local Lie group manifold geometry -- frame, connection, and curvature
• symmetric spaces
• Kaluza-Klein theory
• appearance and incorporation of Higgs scalars
• the Peter-Weyl theorem and its use for calculating harmonics

And wherever else the discussion takes us. I'd like things to be (perhaps painfully) specific and pedantic -- relying on explicit examples. I'd like to mostly play with SU(2) and SU(3) as the simplest non-trivial and practical examples. What I'm after is to fully describe these groups as manifolds in terms of their local geometry, symmetries, global geometry, harmonics, etc. And show how they can be incorporated into Kaluza-Klein theory.

I'll usually ask questions at the end of posts. Sometimes I'll know the answer, and sometimes I won't. These will either serve as "homework" (I'll wait until someone (PF'ers welcome) answers correctly before proceeding) or as open questions hopefully leading to me learning stuff. (If you want to play, it will help if you have Mathematica or Maple available to use -- or it may be possible to do things the hard way.) I'll also happily answer questions (or meta-questions) related to the posts, probably with references to the wiki.

I'm not sure exactly where this will go or how it will evolve as a discussion, but I thought it would be fun to try here on PF. Now I need to add the first post to this zero-eth one...
 
  • #2
(1) Lie algebra to Lie group manifold

Look here for related background material:

http://deferentialgeometry.org/#[[Lie group]]

A Lie group, in contrast to any old group, is also a manifold. This manifold can be given a metric, and hence a geometry, such that the flows induced by the Lie algebra generators correspond to Killing vector fields. It will be good to work this out explicitly for a specific example.

The three Lie algebra generators for su(2) may be represented by 2x2 traceless anti-Hermitian matrices related to the Pauli matrices,
[tex]
\begin{array}{ccc}
T_1 = i \sigma_{1}^{P} = \left[\begin{array}{cc}
0 & i\\
i & 0\end{array}\right] &
T_2 = i \sigma_{2}^{P}=\left[\begin{array}{cc}
0 & 1\\
-1 & 0\end{array}\right] &
T_3 = i \sigma_{3}^{P}=\left[\begin{array}{cc}
i & 0\\
0 & -i \end{array}\right]\end{array}
[/tex]
From the resulting multiplicative relation,
[tex]
T_A \times T_B = \frac{1}{2} \left( T_A T_B - T_B T_A \right) = - \epsilon_{ABC} T_C
[/tex]
the structure coefficients for this Lie algebra are equal to minus the permutation symbol, [itex]C_{AB}{}^C= -\epsilon_{ABC}[/itex]. Also, the trace of two multiplied su(2) generators provides a useful orthogonality relation,
[tex]
\left< T_A T_B \right> = \frac{1}{2} Tr(T_A T_B) = - \delta_{AB}
[/tex]

Near the identity, elements of a Lie group can be approximately represented using coordinates multiplying the corresponding Lie algebra generators,
[tex]
g \simeq 1 + x^i T_i = 1 + T
[/tex]
In which
[tex]
T = x^i T_i = \left[\begin{array}{cc}
i x^3 & i x^1 + x^2\\
i x^1 - x^2 & -i x^3\end{array}\right]
[/tex]
is a general Lie algebra element labeled by coordinates, [itex]x^i[/itex]. In fact, for SU(2), all group elements can be exactly represented by exponentiating Lie algebra elements,
[tex]
g = e^T = 1 + T + \frac{1}{2!} T^2 + \frac{1}{3!} T^3 + ...
[/tex]
This gives all [itex]g[/itex] as 2x2 coordinatized unitary matrices with unit determinant.

The first "homework" question is:

What is this g matrix, explicitly, in terms of these coordinates?

Some hints:

Define a new quantity,
[itex]r = \sqrt{(x^1)^2+(x^2)^2+(x^3)^2}[/itex]
What do you get for arbitrary powers of T?
Use the series expansions for sin and cos of r.
Write the answer in terms of sin and cos of r, and T.
 
  • #3
Ok, I'm going to start to digest this, piece by piece :). (This could get messy if we don't have sub-threads ;o) ).

Flows and Killing vector fields.

I've not seen this explicitly. When I first came across group manifolds and constructed metrics on them it was in terms of the left-invariant one-forms,
[tex]\lambda_a T_a = g^{-1} dg[/tex];
I guess that formally these are referred to as Maurer-Cartan forms. For an N-dimensional group there are N of these, which form a basis of one-forms on the manifold,
[tex]ds^2 = \sum_i (\lambda_i)^2 [/tex].
There should also be some vectors dual to these one-forms; how do these relate to the Killing vector fields?

In general there could be up to N(N+1)/2 Killing vectors (or is that Killing fields?), which come from the infinitesimal isometries of the metric
[tex]\xi_{a;b} + \xi_{b;a} = 0[/tex]
whereas there are only ever going to be N one-forms. :/
 
Last edited:
  • #4
Hi Joe, welcome to Physics Forums.

I was going to get into Killing vectors -- just as soon as someone writes down exactly what g is...

The key expression for calculating the Killing vector fields is going to be:
[tex]
\vec{\xi_A} \underrightarrow{d} g = T_A g
[/tex]
This expresses the fact that the left action of the Lie algebra generator, [itex]T_A[/itex], on group elements is equal to the flow induced on the group manifold by the corresponding Killing vector field, [itex]\vec{\xi_A}[/itex]. Once we know g in terms of coordinates, we can calculate its derivatives and inverse and find [itex]\vec{\xi_A}[/itex] explicitly. I'll go ahead and do that as soon as someone writes down what g is, which should be easy if you play with it for a few minutes.

(If you'd rather have me write out the calculations, instead of tackling illustrative "homework" problems, let me know and I'll just do that.)

We will talk about symmetries of our group manifold. Typically, a group manifold is of higher dimension than it needs to be to have the symmetries corresponding to its Lie algebra. You can reduce this "waste" by "dividing" by proper subgroups to get a "symmetric space." We'll do all this. :)
 
  • #5
Excellent news. I'm working on the form of g right now; I believe I've just seen the trick - even powers of T appear to have a nice form :).

I'm happy to work through the examples for the time being; there's nothing like doing it to learn it.

In the meantime, could you perhaps clarify your use of the upper and lower arrows? I can guess their meaning, but it doesn't hurt to be explicit.
 
Last edited:
  • #6
Great!

The related wiki page is here:

http://deferentialgeometry.org/#[[vector-form algebra]]

Explicitly, every tangent vector gets an arrow over it,
[tex]
\vec{v}=v^i \vec{\partial_i}[/tex]
and every 1-form gets an arrow under it,
[tex]\underrightarrow{f} = f_i \underrightarrow{dx^i}[/tex]
These vectors and forms all anti-commute with one another. And the coordinate vector and form basis elements contract:
[tex]
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j
[/tex]
so
[tex]
\vec{v} \underrightarrow{f} = v^i f_i
[/tex]
And, in the expression I wrote in the post above,
[tex]
\vec{\xi_A} \underrightarrow{d} g = \xi_A{}^i \partial_i g
[/tex]
in which [itex]\partial_i[/itex] is the partial derivative with respect to the [itex]x^i[/itex] coordinate.

The notation is slightly nonstandard, but looks good and works very well, even when extended to vectors and forms of higher order.
 
  • #7
Ok, here's the answer to the "homework".

We are computing the explicit form of the group element for SU(2) in terms of the generators [itex]T_a[/itex] in the given representation.

We use the power series expansions for the sine and cosine of r,

[tex]
\begin{align*}
\cos(r) & = 1 - \frac{1}{2!}r^2 + \frac{1}{4!}r^4 - \frac{1}{6!}r^6 + \dots \\
\sin(r) & = r - \frac{1}{3!}r^3 + \frac{1}{5!}r^5 - \frac{1}{7!}r^7 + \dots
\end{align*}
[/tex]

and for the exponential of the matrix T:

[tex]
\begin{align*}
e^T & = I + T + \frac{1}{2!}T^2 + \frac{1}{3!}T^3 + \frac{1}{4!}T^4 + \frac{1}{5!}T^5 + \dots \\
& = \left(I + \frac{1}{2!}T^2 + \frac{1}{4!}T^4 + \dots\right) +
\left(T + \frac{1}{3!}T^3 + \frac{1}{5!}T^5 + \dots \right)
\end{align*}
[/tex]

where [itex]I[/itex] is the identity matrix.

We observe that [itex]T.T = -r^2 I[/itex], and that therefore even powers of T take the form

[tex]
T^{2n} = (-1)^n r^{2n} I.
[/tex]

We can substitute this back into the expansion for the exponential to obtain:

[tex]
\begin{align*}
e^T
& = I \left(1 - \frac{1}{2!}r^2 + \frac{1}{4!}r^4 - \frac{1}{6!}r^6 + \dots\right) +
T \left(1 - \frac{1}{3!}r^2 + \frac{1}{5!}r^4 - \frac{1}{7!}r^6 + \dots \right) \\
& = I \cos(r) + \frac{1}{r} T \sin(r)
\end{align*}
[/tex]
 
Last edited:
  • #8
(2) Killing vector fields

Exactly right. So, the matrix expression for a SU(2) element as a function of SU(2) manifold coordinates is
[tex]
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
[/tex]
Since it's a SU(2) element, it has unit determinant and its inverse is its Hermitian conjugate:
[tex]
g^- = \cos(r) - x^i T_i \frac{\sin(r)}{r}
[/tex]
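
(An aside, not in the original post: if you'd rather check this closed form numerically than by hand -- Mathematica or Maple would do, but here's a rough Python sketch, assuming numpy and scipy are available. It also confirms the unitarity and unit determinant just mentioned.)

[code]
import numpy as np
from scipy.linalg import expm

# su(2) generators T_A = i * (Pauli matrices), as defined at the top of the thread
T = [np.array([[0, 1j], [1j, 0]]),
     np.array([[0, 1], [-1, 0]], dtype=complex),
     np.array([[1j, 0], [0, -1j]])]

x = np.array([0.3, -0.7, 0.5])               # arbitrary coordinates x^i
r = np.linalg.norm(x)
Tx = sum(xi * Ti for xi, Ti in zip(x, T))    # T = x^i T_i

g_series = expm(Tx)                                           # g = e^T, brute force
g_closed = np.cos(r) * np.eye(2) + (np.sin(r) / r) * Tx       # the closed form above

print(np.allclose(g_series, g_closed))                        # True
print(np.allclose(g_closed @ g_closed.conj().T, np.eye(2)))   # unitary
print(np.isclose(np.linalg.det(g_closed), 1.0))               # unit determinant
[/code]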

The next thing is to understand the symmetries of the manifold. We can associate a symmetry, or Killing vector field, [itex]\vec{\xi_A}[/itex], with the flow induced by each Lie algebra generator acting from the left:
[tex]
\xi_A{}^i \partial_i g = T_A g
[/tex]
There are also Killing vector fields associated with generators acting from the right:
[tex]
\xi'_A{}^i \partial_i g = g T_A
[/tex]
Notice that the Lie algebra is necessarily the same as the left Killing vector / symmetry algebra under the Lie derivative:
[tex]
(T_A T_B - T_B T_A) g = ( \xi_A{}^i \partial_i \xi_B{}^j \partial_j - \xi_B{}^i \partial_i \xi_A{}^j \partial_j ) g
[/tex]
[tex]
= C_{ABC} T_C g = ( L_{\vec{\xi_A}} \vec{\xi_B} ) \underrightarrow{d} g = C_{ABC} \vec{\xi_C} \underrightarrow{d} g
[/tex]
The sign of the structure coefficients swaps for the "right acting" Killing vector field algebra.

Now I'll go ahead and calculate the set of three "left acting" Killing vector fields over the group manifold. Multiplying the symmetry equation by the inverse group element gives:
[tex]
\xi_A{}^i ( \partial_i g ) g^- = T_A
[/tex]
The three Killing vector fields each have three components, so [itex]\xi_A{}^i[/itex] is a square matrix that can be inverted and multiplied in to give
[tex]
( \partial_i g ) g^- = \xi^-_i{}^A T_A
[/tex]
(Note: If we consider this as an equation relating Lie algebra valued 1-forms, it's
[tex]
( \underrightarrow{d} g ) g^- = \underrightarrow{\xi^-}^A T_A
[/tex]
which we'll use later.) We'll next multiply both sides by [itex]T_B[/itex] and use the orthogonality of our Pauli matrix generators under the matrix trace to get the inverse Killing vector matrix all by itself on one side:
[tex]
\xi^-_i{}^B = - \xi^-_i{}^A < T_A T_B> = - < ( \partial_i g ) g^- T_B >
[/tex]
So now we can just calculate that out explicitly, which is made easy by the nice form of g you found:
[tex]
\xi^-_i{}^B = - < \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B >
[/tex]
The Pauli matrices are traceless, so only a few terms will survive the trace, with the generator orthogonality under the trace used again to give
[tex]
\xi^-_i{}^B = \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} )
[/tex]
Inverting this matrix (ow, my head! Mathematica helped.) gives the matrix of Killing vector fields over the SU(2) manifold:
[tex]
\xi_A{}^i = \delta_{iA} \frac{r}{\sin(r)\cos(r)} + x^A x^i ( \frac{1}{r^2} - \frac{1}{r \sin(r) \cos(r)} )
[/tex]
These are the components of the three Killing vector fields over the group manifold associated with the left action of Lie algebra generators on group elements.

Something interesting to note: In this whole post, we never had to use the matrix representation of the generators -- all we needed were the commutation relations and orthogonality under the trace. In fact, if the generators, [itex]T_A[/itex] are thought of as Clifford algebra bivectors, everything we've done works out exactly the same way, without ever looking at a Pauli matrix explicitly. The trace operator, [itex]<>[/itex], is the same (up to a multiplicative factor equal to the matrix dimension) as the Clifford algebra "scalar part" operator. In the next post I can talk about this Clifford algebra stuff (and rotations and double covers) or go on to talk about the frame, metric, and connection on the group manifold. I'll get to the Clifford algebra stuff soon anyway, but it's your choice what we do next. Clifford and rotations -- or metric, frame, and connection?

So... the next "homework" is...
1) Make sure I didn't mess this calculation up anywhere. ;)
2) What, explicitly, are the other three Killing vector fields, [itex]\vec{\xi'_A}[/itex] associated with the right action of the generators?
3) What would you like to see next: Clifford algebra and rotations, or the group manifold metric and geometry?

(This post represents a bit of work (my whole evening, in fact) so feel free to ask questions about it for a bit. And if we can't get the "right corresponding" Killing fields tomorrow, I'll try to do it so we can move on.)
 
Last edited:
  • #9
Where does the expression [itex] \xi_A{}^i \partial_i g = T_A g [/itex] come from, i.e. how do we derive it? Also, there appears to be an implicit assumption that there are as many vector fields as there are generators, that is [itex]i[/itex] and [itex]A[/itex] run over the same indices. Why is this obvious? In general there could be up to N(N+1)/2 Killing vector fields, whereas here we have exactly N.

Also, it looks to me, in Geometric Algebra language, that the left hand side operator is something like: [itex] \xi_A \cdot \nabla [/itex] which is a scalar, however we know that the [itex]T_A[/itex] is going to be a bi-vector... what's going on here?

Other thoughts that come to mind are:

  • Group element parameters, vs coordinates.
    The [itex]x^i[/itex]s parametrise the elements of the group, and we can obviously define these killing fields in terms of them, and so they can also be considered coordinates on the manifold.
  • r is obviously the length of a vector, where the [itex]x^i[/itex] are coordinates in an orthonormal frame. However the appearance of the sine and cosine of this length is a mystery to me, raising the question "When is the length of a vector the same as an angle?".
  • I'm looking forward to seeing how this is manifestly the surface of a 3-sphere.
 
Last edited:
  • #10
garrett said:
The three Killing vector fields each have three components, so [itex]\xi_A{}^i[/itex] is a square matrix that can be inverted and multiplied in to give
[tex]
( \partial_i g ) g^- = \xi^-_i{}^A T_A
[/tex]
We'll next multiply both sides by [itex]T_B[/itex] and use the orthogonality of our Pauli matrix generators under the matrix trace to get the inverse Killing vector matrix all by itself on one side:
[tex]
\xi^-_i{}^B = - \xi^-_i{}^A < T_A T_B> = - < ( \partial_i g ) g^- T_B >
[/tex]

Hmm. I'm not sure about this step. Once you have multiplied to the right by T_B on each side, if you want to take the trace you have to trace the whole thing, i.e. the LHS is:

[tex]
\begin{align*}
( \partial_i g ) g^- & = \xi^-_i{}^A T_A \\
( \partial_i g ) g^- T_B & = \xi^-_i{}^A T_A T_B \\
< ( \partial_i g ) g^- T_B> & = <\xi^-_i{}^A T_A T_B>
\end{align*}
[/tex]

What step do you use to remove the [itex]\xi^-_i{}^A[/itex] matrix from the trace on the right hand side so that you can use the orthogonality condition?

Oh, actually I see it. [itex]\xi^-_i{}^A[/itex] is not a matrix in this expression, it's just a scalar, and so it can be pulled out the front.
 
Last edited:
  • #11
Good questions.

A symmetry is a map from the manifold to itself. A continuous map, or flow, can be visualized as moving the manifold coordinates by a little bit:
[tex]
x^i \rightarrow x'^i = x^i + \epsilon^A \xi_A^i
[/tex]
in which [itex]\vec{\xi} = \epsilon^A \vec{\xi_A} [/itex] is a vector field on the manifold, with "small" parameters, [itex]\epsilon^A[/itex]. Under a flow (also known as a diffeomorphism), a function of manifold points, such as the group element g(x), changes as:
[tex]
g(x) \rightarrow g'(x) = g(x + \epsilon^A \xi_A) \simeq g(x) + \epsilon^A \xi_A^i \partial_i g(x)
[/tex]
to first order via Taylor expansion. Now, there is also a map on group elements induced by the Lie algebra generators:
[tex]
g \rightarrow g' \simeq (1 + \epsilon^A T_A) g = g + \epsilon^A T_A g
[/tex]
(and another map for the group element acting from the other side)
The symmetry relation we want comes from equating the maps induced by the Lie algebra generators with the corresponding diffeomorphisms,
[tex]
\xi_A^i \partial_i g = T_A g
[/tex]
Voilà.

Now, as for the Clifford algebra question: The group element is an exponential of a bivector, [itex]g = e^T[/itex], so it is a mixed, even graded multivector. Taking its derivative "brings down" a bivector, so there is no grade inconsistency. Grade consistency is a good thing to keep an eye on though, and we'll use it later.

Did the rest of the previous post make sense?
 
Last edited:
  • #12
Yes, exactly, [itex]\xi^-_i{}^A[/itex] is a bunch of scalars labeled by indices, which each run from 1 to 3. You can then think of that as a set of three 1-forms, or as a 3x3 "matrix" -- but not a matrix in the same algebra as g.

By the way, the [itex]g^- = g^{-1}[/itex] notation, indicating an inverse element, comes from Donald Knuth -- I also like it so I stole it.
 
Last edited:
  • #13
Answering your other questions:

For now, [itex]r = \sqrt{(x^1)^2 + (x^2)^2 + (x^3)^2}[/itex] is best thought of as just a notational convenience.

We should see the relationship to spheres when we establish the geometry of the group manifold.

Yes, the group parameters are the group manifold coordinates. The A and i indices are, for now, in the same class and are interchangeable. This will be different when we investigate symmetric spaces.
 
Last edited:
  • #14
garrett said:
A symmetry is a map from the manifold to itself. A continuous map, or flow, can be visualized as moving the manifold coordinates by a little bit:
[tex]
x^i \rightarrow x'^i = x^i + \epsilon^A \xi_A^i
[/tex]
in which [itex]\vec{\xi} = \epsilon^A \vec{\xi_A} [/itex] is a vector field on the manifold, with "small" parameters, [itex]\epsilon^A[/itex].

Ok, I get this. Another way of getting at it is to study what happens to the components of the metric [itex]g_{ij}(x) = g'_{ij}(x')[/itex] (which we've not come to yet, but I'll mention it anyway) as [itex]x[/itex] is transformed into a different set of coordinates (or basis), via [itex]x \rightarrow x' = x + \epsilon \xi[/itex]. If we try to find the condition such that [itex]g_{ij}(x) = g_{ij}(x')[/itex], i.e. the components don't change, we end up with the condition on [itex]\xi[/itex] that I mentioned in an earlier post, namely, [itex]\xi_{a;b} + \xi_{b;a} = 0[/itex]. These are also Killing vector fields, or isometries of the metric.

garrett said:
Now, as for the Clifford algebra question: The group element is an exponential of a bivector, [itex]g = e^T[/itex], so it is a mixed, even graded multivector. Taking its derivative "brings down" a bivector, so there is no grade inconsistency. Grade consistency is a good thing to keep an eye on though, and we'll use it later.

Hmm, there is an inconsistency. Acting on it with a scalar derivative [itex]\frac{\partial}{\partial x^i}[/itex], which we appear to be doing, doesn't change the grade at all. I would agree with you if we were contracting it with the vector derivative, [itex]\nabla = e^i \partial_i[/itex]. That's not what's happening here though is it?

garrett said:
Did the rest of the previous post make sense?

2) What, explicitly, are the other three Killing vector fields, associated with the right action of the generators?
3) What would you like to see next: Clifford algebra and rotations, or the group manifold metric and geometry?

Yes, it's making sense. I've not expanded the trace out yet, or calculated the fields associated with the right action. I was hoping to do it tonight, but I'm not going to get the chance it seems.

Let's do the Clifford stuff as there is an open question about the grade lowering stuff. I'm going to be out of the country over the weekend, and I already know most of the Clifford stuff -- so it will give the others (are there any others? :) a chance to catch up.

p.s. using [itex]g^-[/itex] to indicate the inverse; I like that. I like it even more knowing it was Knuth's :). I don't use enough of his stuff (directly).
 
Last edited:
  • #15
Hi garrett

This PF is very interesting... just reading the answers lets you learn a lot...
 
  • #16
OK, we'll talk about Clifford algebra a bit.

First, to answer the grade consistency question: For our three dimensional Clifford algebra (it's actually [itex]2^3=8[/itex] dimensional, but with three basis vectors) our su(2) group element, g, is a scalar plus a bivector. What grades do you get if you multiply this times an arbitrary bivector? You can't get a four-vector, since there is none in the algebra, so you get... a scalar plus a bivector. The grades match on both sides. Happy?
 
  • #17
Hi Mehdi, welcome to Physics Forums.

I'll try to be back later tonight to relate this group and Lie algebra stuff to Clifford algebra, which you should find interesting.
 
  • #18
All my following comments will be extracted from or inspired by articles written on the internet by R.F.J. van Linden. However, I will not give the internet address of the articles, just to let you comment on the theory without being tempted to adopt too quickly the view of the author, R.F.J. van Linden.

Van Linden :
” From various points of view a fractal-like universe is described. Unlike in usual fractals, recurring patterns correlate with the number of dimensions in the observation, i.e., zooming out and in occurs by adding or removing dimensions rather than changing scale.”

Van Linden :
… “Some point-shaped being lives on a circle. His limited 1D vision makes him observe the circle as a straight line. To actually see the extrinsic curvature he would need to have 2D vision.”

Van Linden:
“What behaves like a wave in n-dimensions behaves like a particle in
(n-1)-dimensions”
… “...being the basis for wave-particle duality and Heisenberg's uncertainty relations.
So photons behave like particles in 3D and waves in 4D. Mass behaves like particles in 4D and waves in 5D, and so on. The particle nature of a photon results from the way we observe its 4D wave pattern in 3D.”

Mehdi:
“Let’s try to put some equations to these comments above, from the perspective of the Kaluza-Klein theory for example!”
 
  • #19
From Wikipedia encyclopedia:
"Kaluza-Klein theory (or KK theory, for short) is a model which sought to unify classical gravity and electromagnetism, first published in 1921.
It was discovered by the mathematician Theodor Kaluza that if general relativity is extended to a five-dimensional spacetime, the equations can be separated out into ordinary four-dimensional gravitation plus an extra set, which is equivalent to Maxwell's equations for the electromagnetic field, plus an extra scalar field known as the dilaton (in theoretical physics, the dilaton originally referred to a theoretical scalar field). In 1926, Oskar Klein proposed that the fourth spatial dimension is curled up in a circle of very small radius, i.e. that a particle moving a short distance along that axis would return to where it began. The distance a particle can travel before reaching its initial position is said to be the size of the dimension. This, in fact, also gives rise to quantization of charge, as a wave directed along a finite axis can only occupy discrete frequencies. (This occurs because electromagnetism is a U(1) symmetry theory and U(1) is simply the group of rotations around a circle.)"
 
  • #20
Hey Mehdi, thanks for the quotes, but... I'd like to use this forum for a mathematically oriented discussion. It's easy to string words together to make speculative descriptions of physics (and many do), but the real work and understanding comes from putting math together in the right way. Once that's done, you can talk about it a bit (in a way that's backed by the math).

We will get to some of these ideas, mathematically, but we're still several posts (and many calculations) away from Kaluza-Klein theory. But we will get there! I'd just like to build it up step by step.

If you want my opinion on the Van Linden quotes: I think they're mostly worthless. You have to do a heck of a lot of work before you can say anything potentially true and interesting about the universe -- and it's clear that hasn't been done by the author. Of course, that's just my opinion.

If you're eager to get to the real stuff, understanding (and being able to reproduce) the calculations in this thread should be a good start. Once again, just my opinion.
 
  • #21
A Lie algebra L is a linear space spanned by a basis [itex] X_k [/itex], and possessing an antisymmetric product [.,.] that obeys

[tex] [X_i,X_j]=c_{ij}^{k} X_k [/tex]

over some field K, where [.,.] is the antisymmetric Lie product, and real [itex] c_{ij}^{k} [/itex] are the structure constants of the algebra.

Lie algebras can be classified by the structure of their Cartan metric or Killing form.

The Cartan metric is defined by :
[tex] g_{ij} = c_{im}^{n} c_{jn}^{m} [/tex]

The Killing form is defined in terms of adjoint representation of the algebra: Associate with any element A of L a linear transformation adj(A) defined by the left action of the algebra on itself.
For any Y in L, [A, Y] is also in L. We can define the adjoint representation adj(A) by adj(A)Y = [A, Y]

In particular, for fixed k, let [itex] A = X_k[/itex] and represent Y on the algebra basis [itex] X_j [/itex], so that

[tex] Y = y^j X_j [/tex]

then

[tex] adj(X_k) Y = [X_k, Y] = y^j [X_k, X_j] = y^j c_{kj}^{i} X_i [/tex]
where the [itex] y^j [/itex] and the [itex] X_i [/itex] transform contragrediently to each other under the group of basis transformations in the algebra.
The adjoint representation of the group is irreducible for any simple Lie group.
 
Last edited:
  • #22
Hey Mehdi,
I don't wish to dissuade you, or discourage your interest, but try to limit your posts a bit, maybe? I'm trying to introduce things at an elementary level, with very explicit and illustrative examples. It doesn't help things to have you quoting random snippets from other sources. Keep in mind that I've digested ALL this stuff, and what I'm trying to do is present it in a way that's especially coherent -- with an aim for exactly where I want to go many posts in the future. Specifically, I won't be using that choice of metric you posted.

Maybe try to do the "homework" problems I wrote, instead? :) I presented them to be learned from.

But as long as you bring it up, I want to change something I said in the first post: Because of the unconventional [itex]\frac{1}{2}[/itex] floating around in my anti-symmetric bracket, my statement of the structure coefficients should have been [itex]C_{AB}{}^C=-2 \epsilon_{ABC}[/itex]. My mistake.

Anyway, I'm thinking up the best way to present Clifford algebra in the Lie algebra context -- will post that soon.

Also, soon enough, we'll get to an area where I won't have answers in mind, and will probably open things up to new directions from others.

Thanks,
Garrett
 
  • #23
garrett said:
Hey Mehdi,
I don't wish to dissuade you, or discourage your interest, but try to limit your posts a bit, maybe? I'm trying to introduce things at an elementary level, with very explicit and illustrative examples.
Garrett, I wanted to thank you for that. This thread will be highly beneficial to me. I want to learn and understand all that stuff, but I find it frustrating that explicit calculations are never shown (I have never seen any book that shows explicitly all the calculations worked out for a few different groups. There may be some but I am not aware of them).

So your efforts in presenting *explicit* calculations and building slowly the material is highly appreciated! I had not noticed the thread before but I will start going over it this weekend. The bad news is that you will have tons of questions from me :redface:


Patrick
 
  • #24
Thanks Patrick -- this is exactly the issue I'm trying to remedy with this thread, and with the wiki. More than any other method, I learn best by studying the simplest non-trivial examples behind any concept in detail.

So now I'd like to take a tangent into Clifford algebra, which will immediately be related to the su(2) Lie algebra example we've started, and come in very handy later when we work out its geometry.

There are two really nice things about Clifford algebra which draw people to it. The first is that it's a "geometric" algebra: two vectors (grade 1) multiply to give a scalar (grade 0) plus a bivector (grade 2), a vector and a bivector multiply to give a vector plus a trivector (grade 3), etc. The second really nice thing is how rotations are calculated -- bivectors are crossed with any element to rotate it in the plane of that bivector -- which is much nicer than building rotation matrices, especially in higher dimensions. There's also a third, more obscure reason to like Clifford algebra -- it is needed to describe spinors, which are the fields needed to describe fermions. Anyway, on to the simplest nontrivial example...

The Clifford algebra of three dimensional space:

http://deferentialgeometry.org/#[[three dimensional Clifford algebra]]

This algebra, [itex]Cl_3[/itex], is generated by all possible multiplicative combinations of three basis vectors, [itex]\sigma_\iota[/itex]. These basis vectors have a matrix representation as the three Pauli matrices, [itex]\sigma_\iota = \sigma_\iota^P[/itex], given earlier in this thread, with matrix multiplication equivalent to Clifford multiplication. The eight Clifford basis elements are formed by all possible products of these Clifford basis vectors. They are the scalar, 1 (equivalent to the 2x2 identity matrix), the three basis vectors, [itex]\sigma_1, \sigma_2, \sigma_3[/itex], the three bivectors, [itex]\sigma_{12}, \sigma_{13}, \sigma_{23}[/itex], and the pseudoscalar, [itex]\sigma = \sigma_1 \sigma_2 \sigma_3[/itex] (equivalent to the 2x2 identity matrix times the unit imaginary, [itex]i[/itex]). The complete multiplication table for the algebra is (row header times column header equals entry):

[tex]
\begin{array}{cccccccccc}
& | & 1 & \sigma_1 & \sigma_2 & \sigma_3 & \sigma_{12} & \sigma_{13} & \sigma_{23} & \sigma \\
- & + & - & - & - & - & - & - & - & - \\
1 & | & 1 & \sigma_1 & \sigma_2 & \sigma_3 & \sigma_{12} & \sigma_{13} & \sigma_{23} & \sigma \\
\sigma_1 & | & \sigma_1 & 1 & \sigma_{12} & \sigma_{13} & \sigma_2 & \sigma_3 & \sigma & \sigma_{23} \\
\sigma_2 & | & \sigma_2 & -\sigma_{12} & 1 & \sigma_{23} & -\sigma_1 & -\sigma & \sigma_3 & -\sigma_{13} \\
\sigma_3 & | & \sigma_3 & -\sigma_{13} & -\sigma_{23} & 1 & \sigma & -\sigma_1 & -\sigma_2 & \sigma_{12} \\
\sigma_{12} & | & \sigma_{12} & -\sigma_2 & \sigma_1 & \sigma & -1 & -\sigma_{23} & \sigma_{13} & -\sigma_3 \\
\sigma_{13} & | & \sigma_{13} & -\sigma_3 & -\sigma & \sigma_1 & \sigma_{23} & -1 & -\sigma_{12} & \sigma_2 \\
\sigma_{23} & | & \sigma_{23} & \sigma & -\sigma_3 & \sigma_2 & -\sigma_{13} & \sigma_{12} & -1 & -\sigma_1 \\
\sigma & | & \sigma & \sigma_{23} & -\sigma_{13} & \sigma_{12} & -\sigma_3 & \sigma_2 & -\sigma_1 & -1 \\
\end{array}
[/tex]

(Things don't get more explicit than that. ;)

The whole table may be reproduced from the fundamental rules of Clifford algebra: Start with a set of basis vectors, like [itex]\sigma_i[/itex], which may be visualized as an orthonormal set. Multiplying two identical vectors gives 1, like [itex]\sigma_1 \sigma_1 = 1[/itex] (or gives -1 for some Lorentz geometry vectors (to come later)). Otherwise, vectors anti-commute, like [itex]\sigma_1 \sigma_2 = - \sigma_2 \sigma_1 = \sigma_{12}[/itex]. That's it! The other rules are the familiar associative and distributive rules for multiplication and addition.
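
(Aside, not from the original post: if you'd rather spot-check entries of this table than multiply Pauli matrices by hand, here's a rough Python sketch, assuming numpy is available, using the matrix representation of the basis elements.)

[code]
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_1
s2 = np.array([[0, -1j], [1j, 0]])                 # sigma_2
s3 = np.array([[1, 0], [0, -1]], dtype=complex)    # sigma_3
I2 = np.eye(2)

# the remaining Cl_3 basis elements in the Pauli matrix representation
s12, s13, s23 = s1 @ s2, s1 @ s3, s2 @ s3
s123 = s1 @ s2 @ s3                                # pseudoscalar

# spot-check a few table entries: row element times column element
print(np.allclose(s2 @ s12, -s1))        # sigma_2 sigma_12 = -sigma_1
print(np.allclose(s12 @ s13, -s23))      # sigma_12 sigma_13 = -sigma_23
print(np.allclose(s123 @ s123, -I2))     # sigma sigma = -1
print(np.allclose(s1 @ s2, -(s2 @ s1)))  # distinct basis vectors anticommute
[/code]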

It is also very useful to break this product into symmetric (dot) and antisymmetric (cross) products:
[tex]
A \cdot B = \frac{1}{2} (A B + B A)
[/tex]
[tex]
A \times B = \frac{1}{2} (A B - B A)
[/tex]

Now we find su(2) in here... the subalgebra formed by the three bivectors under the cross product is the su(2) Lie algebra. The identification of generators is
[tex]
\begin{array}{ccc}
T_1 = i \sigma_{1}^{P} = \sigma_{23},
&
T_2 = i \sigma_{2}^{P} = -\sigma_{13},
&
T_3 = i \sigma_{3}^{P} = \sigma_{12}
\end{array}
[/tex]
and looking at the multiplication table shows this subalgebra has the same structure coefficients as su(2), and is therefore equivalent.

Now we look at the SU(2) element Joe calculated earlier:
[tex]
g = e^{x^A T_A} = \cos(r) + x^A T_A \frac{\sin(r)}{r}
[/tex]
and see that this, which we interpreted before as a 2x2 matrix, is a mixed grade Clifford element.

Next I want to use this to do 3D rotations. But first, a real quick question to make sure you're awake:

I said the g above are of "mixed grade" -- what exactly are the grades in g? (Choose from {0,1,2,3})

Someone answer this and I'll go on to rotations. :)

(And I'm still hoping someone will calculate the Killing vectors corresponding to right acting generators -- it will be important for the SU(2) geometry)
 
Last edited:
  • #25
garrett said:
So now we can just calculate that out explicitly, which is made easy by the nice form of g you found:
[tex]
\xi^-_i{}^B = - < \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B >
[/tex]
The Pauli matrices are traceless, so only a few terms will survive the trace, with the generator orthogonality under the trace used again to give
[tex]
\xi^-_i{}^B = \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} )
[/tex]

What about the higher order terms? There also appear to be terms proportional to [itex] < T_i T_j T_k > = 2 \epsilon_{ijk} [/itex]. Why are we neglecting these?
 
  • #26
garrett said:
Answering your other questions:

For now, [itex]r = \sqrt{(x^1)^2 + (x^2)^2 + (x^3)^2}[/itex] is best thought of as just a notational convenience.

Ok, but there are then surely some bounds on the validity of the group element; since we expanded in a power series, the answer is only going to be valid for small x and small r, and the series will break down for large coordinates.
 
  • #27
Taoy said:
What about the higher order terms? There also appear to be terms proportional to [itex] < T_i T_j T_k > = 2 \epsilon_{ijk} [/itex]. Why are we neglecting these?

Because I missed that term! You're right, I thought those would all drop out, but they don't -- one of them does survive. ( By the way, because of the way I defined <> with a half in it, it's [itex] < T_i T_j T_k > = \epsilon_{ijk} [/itex] ) So, the correct expression for the inverse Killing vector field should be
[tex]
\xi^-_i{}^B = - < \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B >
[/tex]
[tex]
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} ) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}
[/tex]

Thanks for catching that! ( It's why I asked question (1) )

And now I have to go figure out what the inverse of that is...
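
(Aside, not from the original post: here's a rough Python sketch, assuming numpy is available, that checks this corrected expression against a numerical derivative of g at an arbitrary point.)

[code]
import numpy as np

T = [np.array([[0, 1j], [1j, 0]]),
     np.array([[0, 1], [-1, 0]], dtype=complex),
     np.array([[1j, 0], [0, -1j]])]

eps = np.zeros((3, 3, 3))                  # permutation symbol epsilon_ijk
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

def g(x):
    r = np.linalg.norm(x)
    Tx = sum(xi * Ti for xi, Ti in zip(x, T))
    return np.cos(r) * np.eye(2) + (np.sin(r) / r) * Tx

x = np.array([0.3, -0.7, 0.5])             # arbitrary point on the group manifold
r = np.linalg.norm(x)
h = 1e-6

# numerical xi^-_i^B = -< (d_i g) g^- T_B >, with <M> = (1/2) Tr(M)
xi_inv_num = np.zeros((3, 3))
for i in range(3):
    dx = np.zeros(3); dx[i] = h
    dgi = (g(x + dx) - g(x - dx)) / (2 * h)
    for B in range(3):
        xi_inv_num[i, B] = -0.5 * np.trace(dgi @ np.linalg.inv(g(x)) @ T[B]).real

# the corrected closed form, including the epsilon term
s, c = np.sin(r), np.cos(r)
xi_inv = (np.eye(3) * s * c / r
          + np.outer(x, x) * (1 / r**2 - s * c / r**3)
          + np.einsum('ikB,k->iB', eps, x) * s**2 / r**2)

print(np.allclose(xi_inv_num, xi_inv, atol=1e-5))   # True
[/code]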
 
  • #28
Taoy said:
Ok, but there are then surely some bounds on the validity of the group element; since we expanded in a power series, the answer is only going to be valid for small x and small r, and the series will break down for large coordinates.

The expression you calculated,
[tex]
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
[/tex]
is a perfectly valid element of SU(2) for all values of x. Go ahead and multiply it times its Hermitian conjugate and you'll get precisely 1.

There is something interesting going on with the domain of the x though, so I'm glad you brought it up. The expression for g is periodic in the x. This is best seen by setting two x's to 0 while letting the other range from 0 to [itex]2 \pi[/itex], at which point g is the identity again. Now, to cover all points of SU(2) exactly once, it may be the case that all three x's range from 0 to [itex]2 \pi[/itex], and that does it -- but I kind of doubt that's true. What I've done in the past is convert the x's to angular coordinates,
[tex]
x^{1} = r\sin(\theta)\cos(\phi)
[/tex]
[tex]
x^{2} = r\sin(\theta)\sin(\phi)
[/tex]
[tex]
x^{3} = r\cos(\theta)
[/tex]
which simplifies things a little. But I wanted to try staying in x coordinates for now.
 
Last edited:
  • #29
garrett said:
[tex]
\xi^-_i{}^B
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} ) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}
[/tex]
And now I have to go figure out what the inverse of that is...

[tex]
\xi_B{}^i
= \delta_{Bi} \frac{r \cos(r)}{\sin(r)} + x^B x^i ( \frac{1}{r^2} - \frac{\cos(r)}{r \sin(r)} ) + \epsilon_{Bik} x^k
[/tex]

:)

By the way, if you're trying to do this yourself by hand, I calculated the inverse by making the ansatz:
[tex]
\xi_B{}^i
= \delta_{Bi} A + x^B x^i B + \epsilon_{Bik} x^k C
[/tex]
and solving for the three coefficients.
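
(Aside, not from the original post: a rough Python sketch, assuming numpy is available, that checks this ansatz really does invert the corrected [itex]\xi^-_i{}^B[/itex] at an arbitrary point.)

[code]
import numpy as np

eps = np.zeros((3, 3, 3))                  # permutation symbol epsilon_ijk
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

x = np.array([0.3, -0.7, 0.5])             # arbitrary point on the group manifold
r = np.linalg.norm(x)
s, c = np.sin(r), np.cos(r)

# corrected xi^-_i^B (rows i, columns B)
xi_inv = (np.eye(3) * s * c / r
          + np.outer(x, x) * (1 / r**2 - s * c / r**3)
          + np.einsum('ikB,k->iB', eps, x) * s**2 / r**2)

# proposed inverse xi_B^i (rows B, columns i)
xi = (np.eye(3) * r * c / s
      + np.outer(x, x) * (1 / r**2 - c / (r * s))
      + np.einsum('Bik,k->Bi', eps, x))

print(np.allclose(xi @ xi_inv, np.eye(3)))   # True: xi_B^i xi^-_i^A = delta_B^A
[/code]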

Now I'm going for a bike ride, then coming back to do rotations.
 
  • #30
Originally Posted by Garrett :
What I've done in the past is convert the x's to angular coordinates,
[tex]x^{1} = r\sin(\theta)\cos(\phi)[/tex]
[tex]x^{2} = r\sin(\theta)\sin(\phi)[/tex]
[tex]x^{3} = r\cos(\theta)[/tex]
which simplifies things a little. But I wanted to try staying in x coordinates for now.

It looks like we have parametrized the coordinates by means of the angles [itex]\theta[/itex] and [itex]\phi[/itex].
It is related to the condition [itex]{({x^1})^2 + ({x^2})^2+ ({x^3})^2}=1[/itex].
If [itex]x^i[/itex] are interpreted as coordinates in a space [itex]R^3[/itex], this condition describes the unit sphere [itex]S^2[/itex] embedded in that space.
The sphere [itex]S^2[/itex] is a smooth manifold; every closed curve on it can be contracted to a point, so it is simply connected.

But when we use the above parametrization, which map are we defining:
[itex]S^2[/itex] onto SO(3), or [itex]S^2[/itex] onto SU(2)? Or maybe we have to use one more parametrization, which parametrizes the angles [itex]\theta[/itex] and [itex]\phi[/itex] by means of angles [itex]\alpha[/itex] and [itex]\beta[/itex] for example, to identify antipodal points on the sphere?
 
Last edited:
  • #31
Hey Mehdi, nice question. Using this angular parameterization, with a constant r, we have a map from [itex]S^2[/itex] into SU(2). When I show the map from SU(2) to SO(3) (rotations), we'll see that this [itex]S^2[/itex] corresponds to the orientation of the plane of the rotation, and the r value corresponds to the rotation amplitude, or angle.
 
  • #32
rotations

Alright, we have finally come around to rotations. Let's make a rotation using Clifford algebra. First, what do you get when you cross a vector with a bivector? Starting with an arbitrary vector,
[tex]
v = v^1 \sigma_1 + v^2 \sigma_2 + v^3 \sigma_3
[/tex]
and, for example, a "small" bivector in the xy plane,
[tex]
B = \epsilon \sigma_{12}
[/tex]
their cross product gives
[tex]
v \times B = \epsilon ( v^1 \sigma_1 \times \sigma_{12} + v^2 \sigma_2 \times \sigma_{12} + v^3 \sigma_3 \times \sigma_{12})

= \epsilon ( v^1 \sigma_2 - v^2 \sigma_1)
[/tex]
This new vector, [itex]v \times B[/itex], is perpendicular to v and lies in the plane of B. Its negative, [itex]B \times v = -v \times B[/itex], is the "small" vector that needs to be added to v in order to rotate it a small amount in the plane of B:
[tex]
v' \simeq v + B \times v \simeq (1 + \frac{1}{2} \epsilon \sigma_{12}) v (1 - \frac{1}{2} \epsilon \sigma_{12})
[/tex]
where the "[itex]\simeq[/itex]" holds to first order in [itex]\epsilon[/itex]. Infinitesimal rotations like these can be combined to give a finite rotation,
[tex]
v' = \lim_{N \rightarrow \infty} \left(1+ \frac{1}{N} \frac{1}{2} \theta \sigma_{12}\right)^N v \left(1- \frac{1}{N} \frac{1}{2} \theta \sigma_{12}\right)^N
= e^{\frac{1}{2} \theta \sigma_{12}} v e^{-\frac{1}{2} \theta \sigma_{12}} = U v U^-
[/tex]
using the "limit" definition for the exponential. This is an exact expression for the rotation of a vector by a bivector. In three dimensions an arbitrary bivector, B, can be written as
[tex]
B = \theta b
[/tex]
an amplitude, [itex]\theta[/itex], multiplying a unit bivector encoding the orientation, [itex]bb=-1[/itex]. The exponential can then be written using Joe's expression for exponentiating a bivector:
[tex]
U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)
[/tex]
And an arbitrary rotation in any plane can be expressed efficiently as [itex]v' = UvU^-[/itex]. For example, for a rotation of an arbitrary vector by [itex]B=\theta \sigma_{12}[/itex], the result (using some trig identities) is:
[tex]
v' = e^{\frac{1}{2} B} v e^{-\frac{1}{2} B}
= \left(\cos(\frac{1}{2} \theta) + \sigma_{12} \sin(\frac{1}{2} \theta)\right) (v^1 \sigma_1 + v^2 \sigma_2 + v^3 \sigma_3) \left(\cos(\frac{1}{2} \theta) - \sigma_{12} \sin(\frac{1}{2} \theta)\right)
[/tex]
[tex]
= (v^1 \cos(\theta) + v^2 \sin(\theta) ) \sigma_1 + (v^2 \cos(\theta) - v^1 \sin(\theta)) \sigma_2 + v^3 \sigma_3
[/tex]
This is widely considered to be pretty neat, and useful as a general method of expressing and calculating rotations.

Now, we already established that elements of the group SU(2) may be represented as exponentials of bivectors, so these U are SU(2) elements! The "double cover" relationship between SU(2) and rotations (the group SO(3)) is in the expression
[tex]
v' = UvU^-
[/tex]
It is the fact that two different SU(2) elements, U and -U, give the same rotation. That's all there is to it.

To be painfully explicit, it is possible to relate all this to rotation matrices. A rotation matrix is a 3x3 special orthogonal matrix that transforms one set of basis vectors into another. This equates to the Clifford way of doing a rotation as:
[tex]
\sigma'_i = L_i{}^j \sigma_j = U \sigma_i U^-
[/tex]
For any rotation encoded by U (which, as the exponential of a bivector, also represents an arbitrary SU(2) element), the corresponding rotation matrix elements may be explicitly calculated using the trace as
[tex]
L_i{}^j = \left< U \sigma_i U^- \sigma_j \right>
[/tex]

Using Clifford algebra, you think of a rotation as being in a plane (or planes), described by a bivector. This generalizes very nicely to dimensions higher than three, such as for Lorentz transformations and for rotations in Kaluza-Klein theory.
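
(Aside, not from the original post: a rough Python sketch, assuming numpy and scipy are available, that rotates a vector this way, extracts the rotation matrix from the trace formula, and checks that U and -U give the same rotation.)

[code]
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [s1, s2, s3]
s12 = s1 @ s2                       # bivector for the 1-2 plane

theta = 0.8
U = expm(0.5 * theta * s12)         # U = e^{B/2}, with B = theta sigma_12
Ui = np.linalg.inv(U)               # U^-

v = np.array([0.3, -0.7, 0.5])
vmat = sum(vi * si for vi, si in zip(v, sig))   # v as a Clifford vector
vmat_rot = U @ vmat @ Ui                        # v' = U v U^-

# rotation matrix L_i^j = < U sigma_i U^- sigma_j >, with <M> = (1/2) Tr(M)
L = np.array([[0.5 * np.trace(U @ sig[i] @ Ui @ sig[j]).real
               for j in range(3)] for i in range(3)])
v_rot = L.T @ v                                 # components: v'^j = v^i L_i^j

# agrees with the sandwich product and with the explicit result above
print(np.allclose(vmat_rot, sum(vi * si for vi, si in zip(v_rot, sig))))
print(np.allclose(v_rot, [v[0]*np.cos(theta) + v[1]*np.sin(theta),
                          v[1]*np.cos(theta) - v[0]*np.sin(theta), v[2]]))

# -U gives the same rotation matrix as U (the double cover)
Lm = np.array([[0.5 * np.trace((-U) @ sig[i] @ np.linalg.inv(-U) @ sig[j]).real
                for j in range(3)] for i in range(3)])
print(np.allclose(L, Lm))
[/code]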

It's a little odd if you haven't seen it before -- any questions?
 
  • #33
Garrett it is beautiful... I have no question... it is well explained and therefore easy to understand.

You have successfully established a relation between SU(2), SO(3), the rotation matrix and the Clifford algebra [itex]Cl_{ 0,2 }(R)[/itex]... (the Spin(3) group is the universal covering group of SO(3) ?!? and an accidental isomorphism with SU(2) and Sp(1) ?!?).

Maybe one day, you could do the same with another group, let's say the symplectic group and its relation to Clifford algebras (using Lagrangians or Hamiltonians to make the examples more explicit)... Garrett… it's only a wish... ;)
 
Last edited:
  • #34
Originally Posted by garrett:
The exponential can then be written using Joe's expression for exponentiating a bivector:
[tex]U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)[/tex]
And an arbitrary rotation in any plane can be expressed efficiently as [itex]v' = U v U^-[/itex].
Can we then say that U is a rotor ?
If U is a rotor, we can then say that this rotor is an element of SU(2) group.
 
Last edited:
  • #35
Yes, you can call it a rotor, but that's kind of an old term. The more modern description is that it's an element of the [itex]Spin[/itex] group, and in this 3D case, [itex]Spin(3)=SU(2)[/itex].

Here's a wikipedia reference (good reading!):
http://en.wikipedia.org/wiki/Spin_group
 
