Explore Geometry of Symmetric Spaces & Lie Groups on PF

  • #36
Hi Garrett

Can we then say that the quaternions of norm 1 belong to the SU(2) group?
 
  • #37
I know that spinors are related to quaternions... tomorrow I will try to find the link between them...
 
  • #38
I messed up a couple of expressions in the last math post.

First, all occurrences of "[itex]v \times B[/itex]" should be "[itex]B \times v[/itex]", with a corresponding change of sign where relevant.

Second, the expression for the limit of many infinitesimal rotations should be
[tex]
\lim_{N \rightarrow \infty} \left( 1+ \frac{1}{N} \frac{1}{2} \theta \sigma_{12} \right)^N v \left( 1- \frac{1}{N} \frac{1}{2} \theta \sigma_{12} \right)^N
[/tex]

Apologies.
 
  • #39
Mehdi_ said:
Can we then say that the quaternion of norm 1 belong to SU(2) group?
Yes.

The three basis quaternions are the same as the SU(2) generators, which are the same as the Cl_3 bivectors. The quaternion and/or SU(2) group element, U, is represented by coefficients multiplying these, plus a scalar. And U satisfies [itex]UU^\dagger = 1[/itex].
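Here's a quick numerical sketch of exactly this statement (assuming numpy; the identification i → -iσ1, j → -iσ2, k → -iσ3 is just one common choice of representation):

[code]
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2, dtype=complex)

# one common representation: i -> -i sigma_1, j -> -i sigma_2, k -> -i sigma_3
I, J, K = -1j * s1, -1j * s2, -1j * s3
assert np.allclose(I @ J, K) and np.allclose(I @ I, -one)  # quaternion algebra

# a quaternion a + b i + c j + d k of norm 1
a, b, c, d = 0.5, 0.5, 0.5, 0.5
U = a * one + b * I + c * J + d * K
assert np.allclose(U @ U.conj().T, one)   # U U^dagger = 1
assert np.isclose(np.linalg.det(U), 1.0)  # det U = 1, so U is in SU(2)
[/code]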

I know that spinors are related to quaternions... tomorrow I will try to find the link between them...
Heh. Read my last paper. :)
But a discussion of spinors shouldn't go in this thread (yet). Maybe start another one?
 
  • #40
garrett said:
Great!

The related wiki page is here:

http://deferentialgeometry.org/#[[vector-form algebra]]

Explicitly, every tangent vector gets an arrow over it,
[tex]
\vec{v}=v^i \vec{\partial_i}[/tex]
and every 1-form gets an arrow under it,
[tex]\underrightarrow{f} = f_i \underrightarrow{dx^i}[/tex]
These vectors and forms all anti-commute with one another. And the coordinate vector and form basis elements contract:
[tex]
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j
[/tex]
so
[tex]
\vec{v} \underrightarrow{f} = v^i f_i
[/tex]

Sorry for taking so much time to absorb all of this, but although I have heard all the terms mentioned in this thread, I am still learning this stuff.

A quick question: what do you mean by "the vectors and forms all anticommute with one another"??
I thought that one could think of "feeding" a vector to a one-form or vice versa, and that the result was the same in both cases. I guess I don't see where anticommutation might arise in that situation. Could you explain this to me?

Thanks again for a great thread!

Patrick
 
  • #41
"These vectors and forms all anti-commute with one another" should mean:
[tex]\vec{v}=v^i \vec{\partial_i}=-\vec{\partial_i}v^i[/tex]
[tex]\underrightarrow{f} = f_i \underrightarrow{dx^i}=-\underrightarrow{dx^i}f_i[/tex]

That means that order is important... it is a non-commutative algebra
 
  • #42
[itex]\gamma_1[/itex] and [itex]\gamma_2[/itex] are perpendicular vectors.

We start with a vector v equal to [itex]\gamma_1[/itex] and form another v' by adding a tiny displacement vector in a perpendicular direction:

[itex]v=\gamma_1[/itex] and [itex]v'=\gamma_1+\epsilon\gamma_2[/itex]

Similarly, we can start with a vector v equal to [itex]\gamma_2[/itex] and form another v' by adding a tiny displacement vector in a perpendicular direction:

[itex]v=\gamma_2[/itex] and [itex]v'=\gamma_2-\epsilon\gamma_1[/itex]

The minus sign occurs because the bivectors [itex]\gamma_1\gamma_2[/itex] and [itex]\gamma_2\gamma_1[/itex] induce rotations in opposite directions.

Let's construct a rotor r as follows:
[tex]r=vv'=\gamma_1(\gamma_1+\epsilon\gamma_2)=(1+\epsilon\gamma_1\gamma_2) [/tex]

Let's see what happens when we use this rotor to rotate something with N copies of an infinitesimal rotation, taking [itex]\epsilon = \theta/N[/itex]:

[tex]v'= {(1+\epsilon\gamma_2\gamma_1)}^Nv{(1+\epsilon\gamma_1\gamma_2)}^N[/tex]

But in the limit [itex]N \rightarrow \infty[/itex]:

[tex]{(1+\epsilon\gamma_1\gamma_2)}^N=\exp(N\epsilon\gamma_1\gamma_2)=\exp(\theta\gamma_1\gamma_2)=1+\theta\gamma_1\gamma_2-{\frac{1}{2}}{\theta}^2-...[/tex]

(using [itex](\gamma_1\gamma_2)^2=-1[/itex] in the series expansion)

and we find that:

[tex]r(\theta)=\cos(\theta)+\gamma_1\gamma_2\sin(\theta)[/tex]

which is similar to Joe's expression for exponentiating a bivector:

[tex]U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)[/tex]

Even though Joe's expression has [itex]\frac{1}{2}\theta[/itex], the two equations agree, because the rotor angle is always half the rotation angle...
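As a numerical sanity check of these formulas (a sketch, assuming numpy/scipy; the real 2x2 matrices below are just one faithful representation of [itex]\gamma_1[/itex] and [itex]\gamma_2[/itex]):

[code]
import numpy as np
from scipy.linalg import expm

# one real 2x2 representation of the Cl(2) basis vectors gamma_1, gamma_2
g1 = np.array([[1., 0.], [0., -1.]])
g2 = np.array([[0., 1.], [1., 0.]])
assert np.allclose(g1 @ g2, -(g2 @ g1))   # perpendicular vectors anticommute

B = g1 @ g2                               # the bivector gamma_1 gamma_2
theta = 0.7

# r(theta) = cos(theta) + gamma_1 gamma_2 sin(theta)
assert np.allclose(expm(theta * B),
                   np.cos(theta) * np.eye(2) + np.sin(theta) * B)

# half-angle rotor sandwich rotates gamma_1 by theta toward gamma_2
v_rot = expm(-theta / 2 * B) @ g1 @ expm(theta / 2 * B)
assert np.allclose(v_rot, np.cos(theta) * g1 + np.sin(theta) * g2)
[/code]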
 
  • #43
Sure Patrick, glad you're liking this thread.

By "the vectors and forms all anticommute with one another" I mean
[tex]
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
[/tex]
which is the wedge product of two forms, without the wedge written. And
[tex]
\vec{\partial_i} \vec{\partial_j} = -
\vec{\partial_j} \vec{\partial_i}
[/tex]
which tangent vectors have to do for contraction with 2-forms to be consistent. And
[tex]
\vec{\partial_i} \underrightarrow{dx^j} = -
\underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
[/tex]
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

1-form anticommutation is pretty standard, as is vector-form contraction -- often called the vector-form inner product. The vector anticommutation follows from that, and the vector-form anticommutation from that. (Though I haven't seen this done elsewhere.) It makes for a consistent algebra, but it's non-associative for many intermixed vectors and forms, so you need to use parentheses to enclose the desired contracting elements.
 
  • #44
Mehdi_ said:
"These vectors and forms all anti-commute with one another" should mean:
[tex]\vec{v}=v^i \vec{\partial_i}=-\vec{\partial_i}v^i[/tex]
[tex]\underrightarrow{f} = f_i \underrightarrow{dx^i}=-\underrightarrow{dx^i}f_i[/tex]

That means that order is important... it is a non-commutative algebra

Nope, the [itex]v^i[/itex] and [itex]f_i[/itex] are scalar coefficients -- they always commute with everything. (Err, unless they're Grassmann numbers, but we won't talk about that...)

Mehdi's other post was fine.
 
  • #45
Garrett... oops... that's true...
 
  • #46
garrett said:
Sure Patrick, glad you're liking this thread.

By "the vectors and forms all anticommute with one another" I mean
[tex]
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
[/tex]
which is the wedge product of two forms, without the wedge written. And
[tex]
\vec{\partial_i} \vec{\partial_j} = -
\vec{\partial_j} \vec{\partial_i}
[/tex]
which tangent vectors have to do for contraction with 2-forms to be consistent. And
[tex]
\vec{\partial_i} \underrightarrow{dx^j} = -
\underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
[/tex]
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

1-form anticommutation is pretty standard, as is vector-form contraction -- often called the vector-form inner product. The vector anticommutation follows from that, and the vector-form anticommutation from that. (Though I haven't seen this done elsewhere.) It makes for a consistent algebra, but it's non-associative for many intermixed vectors and forms, so you need to use parentheses to enclose the desired contracting elements.
:eek: I had never realized that!

Thank you for explaining this!

For the product of 1-forms, that's not surprising to me, since I would assume a wedge product there.

But is a product of vector fields always understood in differential geometry, or is it an added structure? It seems to me that one could also introduce a symmetric product. What is the consistency condition that leads to this?

Also, I really did not know that "contracting" a one-form and a vector field depended on the order! I have always seen talk about "feeding a vector to a one-form" and getting a Kronecker delta, but I always assumed that one could equally well "feed" the one-form to the vector and get the *same* result. I had not realized that there is an extra sign. What is the consistency condition that leads to this?

Sorry for all the questions, but one thing that confuses me when learning stuff like this is to distinguish what is imposed as a definition and what follows from consistency. I always wonder if a result follows from the need for consistency with previous results or if it's a new definition imposed by hand. But I don't necessarily need to see the complete derivation; if I can just be told "this follows from this and that previous result", then I can work it out myself.

Thank you!
 
  • #47
Certainly. I need to stress this is my own notation, so it is perfectly reasonable to ask me to justify it. Also, it's entirely up to you whether you want to use it -- everything can be done equally well in conventional notation, after translation. (I've just come to prefer mine.)

The conventional notation for the inner product (a vector, [itex]\vec{v}[/itex], and form, [itex]f[/itex], contracted to give a scalar) in Frankel and Nakahara etc. is
[tex]
i_{\vec{v}} f = f(\vec{v})
[/tex]
which I would write as
[tex]
\vec{v} \underrightarrow{f}
[/tex]
I will write the rest of this post using my notation, but you can always write the same thing with "i"'s all over the place and no arrows under forms.

Now, conventionally, there is a rule for the inner product of a vector with a 2-form. For two 1-forms, the distributive rule is
[tex]
\vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right)
= \left( \vec{a} \underrightarrow{b} \right) \underrightarrow{c}
- \underrightarrow{b} \left( \vec{a} \underrightarrow{c} \right)
[/tex]
Using this rule twice, one gets, after multiplying it out:
[tex]
\vec{e} \vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right)
= \left( \vec{a} \underrightarrow{b} \right) \left( \vec{e} \underrightarrow{c} \right)
- \left( \vec{e} \underrightarrow{b} \right) \left( \vec{a} \underrightarrow{c} \right)
= - \vec{a} \vec{e} \left( \underrightarrow{b} \underrightarrow{c} \right)
[/tex]
(the middle expression is manifestly antisymmetric under swapping [itex]\vec{a}[/itex] and [itex]\vec{e}[/itex])
which is the basis for my assertion that
[tex]
\vec{e} \vec{a} = - \vec{a} \vec{e}
[/tex]
This sort of "tangent two vector" I like to think of as a loop, but that's just me being a physicist. ;)

So, now for the vector-form anti-commutation. Once again, keep in mind that you can do everything without ever contracting a vector from the right to a form -- this is just something I can do for fun. But, if you're going to do it, this expression should hold regardless of commutation or anti-commutation:
[tex]
\vec{a} \left( \underrightarrow{b} \underrightarrow{c} \right)
= \left( \underrightarrow{b} \underrightarrow{c} \right) \vec{a}
[/tex]
and, analogously with the original distribution rule, that should equal:
[tex]
= \underrightarrow{b} \left( \underrightarrow{c} \vec{a} \right)
- \left( \underrightarrow{b} \vec{a} \right) \underrightarrow{c}
[/tex]
Comparing that with the result of the original distribution rule shows that we must have
[tex]
\underrightarrow{b} \vec{a} = - \vec{a} \underrightarrow{b}
[/tex]
for all the equalities to hold true, since a vector contracted with a 1-form is a scalar and commutes with the remaining 1-form.

It won't hurt me if you don't like this notation. But do tell me if you actually see something wrong with it!
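To see the order-dependence above concretely, here is a small numerical sketch (assuming numpy), representing the 2-form [itex]\underrightarrow{b} \underrightarrow{c}[/itex] by its antisymmetric components [itex]F_{ij} = b_i c_j - b_j c_i[/itex] and contracting two vectors in both orders:

[code]
import numpy as np

b = np.array([1., 2., 0.])
c = np.array([0., 1., 3.])
F = np.outer(b, c) - np.outer(c, b)   # components of the 2-form b ^ c

a = np.array([2., -1., 1.])
e = np.array([0., 5., 2.])

# contract a first, then e:  e^j a^i F_ij
a_then_e = np.einsum('j,i,ij->', e, a, F)
# contract e first, then a:  a^j e^i F_ij
e_then_a = np.einsum('j,i,ij->', a, e, F)

assert np.isclose(a_then_e, -e_then_a)   # the order flips the sign
[/code]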
 
  • #48
garrett said:
By "the vectors and forms all anticommute with one another" I mean
[tex]
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
[/tex]
which is the wedge product of two forms, without the wedge written. And
[tex]
\vec{\partial_i} \vec{\partial_j} = -
\vec{\partial_j} \vec{\partial_i}
[/tex]
which tangent vectors have to do for contraction with 2-forms to be consistent. And
[tex]
\vec{\partial_i} \underrightarrow{dx^j} = -
\underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
[/tex]
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.

Hi Garrett, I'm a bit confused about this notation. What kind of product are you using here, and are these really vectors? How can we make this notation compatible with the geometric product between vectors?

Oh, wait, I guess that you're just making the assumption that both the vector and the co-vector basis are orthogonal.

I'm reading that your [itex] \vec{\partial_i} [/itex] is a vector such that [itex] \vec{\partial_i} .\vec{\partial_j} = \delta_{ij} | \vec{\partial_i} |^2 [/itex]. Is that right?
 
  • #49
The algebra of vectors and forms at a manifold point, spanned by the coordinate basis elements [itex]\vec{\partial_i}[/itex] and [itex]\underrightarrow{dx^i}[/itex], is completely independent of the algebra of Clifford elements, spanned by [itex]\gamma_\alpha[/itex] -- or, if you like, it's independent of all Lie algebra elements. By the algebras being independent, I mean that all elements of one commute with all elements of the other.

For example, when we calculated the derivative of a group element (to get the Killing fields), we were calculating the coefficients of a Lie algebra valued 1-form:
[tex]
\underrightarrow{d} g = \underrightarrow{dx^i} G_i{}^A T_A
[/tex]
The two sets of basis elements, [itex]\underrightarrow{dx^i}[/itex] and [itex]T_A[/itex], live in two separate algebras.

The vector and form elements don't have a dot product, and I will never associate one with them. Some do, and call this a metric, but things work much better if you work with Clifford algebra valued forms, and use a Clifford dot product.

I might as well describe how this works...
 
  • #50
The link to the wiki notes describing the frame and metric is:

http://deferentialgeometry.org/#frame metric

but I'll cut and paste the main bits here.

Physically, at every manifold point a frame encodes a map from tangent vectors to vectors in a rest frame. It is very useful to employ the Clifford basis vectors as the fundamental geometric basis vector elements of this rest frame. The "frame", then, is a map from the tangent bundle to the Clifford bundle -- a map from tangent vectors to Clifford vectors -- and written as
[tex]
\underrightarrow{e} = \underrightarrow{e^\alpha} \gamma_\alpha = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha
[/tex]
It is a Clifford vector valued 1-form. Using the frame, any tangent vector, [itex]\vec{v}[/itex], on the manifold may be mapped to its corresponding Clifford vector,
[tex]
\vec{v} \underrightarrow{e} = v^i \vec{\partial_i} \underrightarrow{dx^j} \left( e_j \right)^\alpha \gamma_\alpha = v^i \left( e_i \right)^\alpha \gamma_\alpha = v^\alpha \gamma_\alpha = v
[/tex]
This frame includes the geometric information usually attributed to a metric. Here, we can compute the scalar product of two tangent vectors at a manifold point using the frame and the Clifford dot product:
[tex]
\left( \vec{u} \underrightarrow{e} \right) \cdot \left( \vec{v} \underrightarrow{e} \right)
= u^\alpha \gamma_\alpha \cdot v^\beta \gamma_\beta
= u^\alpha v^\beta \eta_{\alpha \beta}
= u^i \left( e_i \right)^\alpha v^j \left( e_j \right)^\beta \eta_{\alpha \beta}
= u^i v^j g_{ij}
[/tex]
with the use of frame coefficients and the Minkowski metric replacing the use of a metric if desired. Using component indices, the "metric matrix" is
[tex]
g_{ij} = \left( e_i \right)^\alpha \left( e_j \right)^\beta \eta_{\alpha \beta}
[/tex]

Using Clifford valued forms is VERY powerful -- we can use them to describe every field and geometry in physics.
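Here is a minimal numerical sketch of that last formula (assuming numpy; the frame coefficients below are made-up numbers, purely for illustration):

[code]
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1., -1., -1., -1.])   # Minkowski metric, one sign convention
e = rng.normal(size=(4, 4))          # made-up frame coefficients (e_i)^alpha

# metric matrix g_ij = (e_i)^alpha (e_j)^beta eta_{alpha beta}
g = np.einsum('ia,jb,ab->ij', e, e, eta)

u = rng.normal(size=4)   # tangent vector components u^i
v = rng.normal(size=4)

# scalar product two ways: through g_ij, and through the mapped
# Clifford components u^alpha = u^i (e_i)^alpha with the dot product eta
lhs = np.einsum('i,j,ij->', u, v, g)
rhs = np.einsum('a,b,ab->', u @ e, v @ e, eta)
assert np.isclose(lhs, rhs)
[/code]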
 
  • #51
garrett said:
The algebra of vectors and forms at a manifold point, spanned by the coordinate basis elements [itex]\vec{\partial_i}[/itex] and [itex]\underrightarrow{dx^i}[/itex], are completely independent from the algebra of Clifford elements, spanned by [itex]\gamma_\alpha[/itex], or, if you like, they're independent of all Lie algebra elements. By the algebra being independent, I mean that all elements commute.

Forget the Lie algebra for the moment. I'm talking about the basis elements [itex] \partial_i \equiv \frac{\partial}{\partial x^i} [/itex] and their dual one-forms. In your notation you put an arrow over the top, indicating that we are dealing with a complete vector, i.e. [itex] e_i \equiv \vec{\partial_i} [/itex]. You then said that they obey an anti-commutation rule: [itex] e_i e_j = -e_j e_i [/itex].

So, my question was about the kind of product that you are using between these elements. In general the product of two vectors carries a symmetric and an antisymmetric part: [itex] e_i e_j = e_i \cdot e_j + e_i \wedge e_j [/itex], and it is only the antisymmetric part which anti-commutes. However, if you are explicitly working in an orthonormal basis then what you say is correct, unless i=j, in which case the two commute.
 
  • #52
garrett said:
The expression you calculated,
[tex]
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
[/tex]
is a perfectly valid element of SU(2) for all values of x. Go ahead and multiply it times its Hermitian conjugate and you'll get precisely 1.

Sure I get that, but the series expansions we use are only valid for small x, for instance substitute [itex]4\pi[/itex] into the series expansion and it doesn't work anymore...

Mehdi_ said:
It is related to the condition, [itex]{({x^1})^2 + ({x^2})^2+ ({x^3})^2}=1[/itex]

Whilst we're here, where does the condition come from? I thought that [itex] g g^- [/itex] might impose some condition on the x's, but it doesn't. Where does it come from? :)
 
  • #53
garrett said:
Because I missed that term! You're right, I thought those would all drop out, but they don't -- one of them does survive. (By the way, because of the way I defined <> with a half in it, it's [itex] < T_i T_j T_k > = \epsilon_{ijk} [/itex].) So, the correct expression for the inverse Killing vector field should be
[tex]
\xi^-_i{}^B = - < \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B >
[/tex]
[tex]
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} ) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}
[/tex]

What happened to the [itex] x^i x^j x^k \epsilon_{jkB} (\cos(r)/r^2 - \sin^2(r)/r^4) [/itex] term?

p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.
 
  • #54
garrett said:
[itex]
\underrightarrow{e} = \underrightarrow{e^\alpha} \gamma_\alpha = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha
[/itex]

What kind of object is [itex] e_\alpha [/itex], and what kind of object is [itex] \gamma_\alpha [/itex]?
Are you using upper and lower arrows purely to signify differential geometry objects? Why not arrows on the gamma too? I take it that this is a vector (as opposed to a dual vector)?
 
  • #55
The [itex]e_\alpha[/itex] are the "legs" of the vierbein or frame; four orthonormal vectors based at a typical point of the manifold. I think the [itex]\gamma_\alpha[/itex] are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).
 
  • #56
Taoy said:
I'm talking about the basis elements [itex] \partial_i \equiv \frac{\partial}{\partial x^i} [/itex] and their dual one-forms. In your notation you put an arrow over the top, indicating that we are dealing with a complete vector, i.e. [itex] e_i \equiv \vec{\partial_i} [/itex]. You then said that they obey an anti-commutation rule: [itex] e_i e_j = -e_j e_i [/itex].

So, my question was about the kind of product that you are using between these elements. In general the product of two vectors carries a symmetric and an antisymmetric part: [itex] e_i e_j = e_i \cdot e_j + e_i \wedge e_j [/itex], and it is only the antisymmetric part which anti-commutes. However, if you are explicitly working in an orthonormal basis then what you say is correct, unless i=j, in which case the two commute.

My justification for creating this algebra in which tangent vectors anti-commute is this: when you contract two tangent vectors with a 2-form, the sign changes depending on the order you do the contraction:
[tex]
\vec{a} \vec{b} \left( \underrightarrow{c} \underrightarrow{f} \right)
= - \vec{b} \vec{a} \left( \underrightarrow{c} \underrightarrow{f} \right)
[/tex]
This fact is standard differential geometry for the inner product of two tangent vectors with a 2-form. I merely elevate this fact to create an algebra out of it, and it motivates my notation. Since the two vectors are contracting with a 2-form, which is anti-symmetric, this "product" of two vectors is also necessarily anti-symmetric. If you like, you need not even consider it a product -- just two tangent vectors being fed to a 2-form in succession. :) That is the conventional interpretation.
 
  • #57
Taoy said:
Sure I get that, but the series expansions we use are only valid for small x, for instance substitute [itex]4\pi[/itex] into the series expansion and it doesn't work anymore...

I think the series expansion for the exponential is an exact equality as long as we keep all terms in the infinite series, which we do. I think it's correct for 4pi, though these x's should be periodic variables, inside the range 0 to 2pi.

Whilst we're here, where does the condition come from? I thought that [itex] g g^- [/itex] might impose some condition on the x's, but it doesn't. Where does it come from? :)

There is no restriction like that on the x coordinates -- best to forget he said that. (I believe he was making an analogy at the time.)
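A quick numerical check of this (a sketch assuming numpy/scipy, taking the common representation [itex]T_i = i \sigma_i[/itex] for the su(2) generators): the closed form [itex]\cos(r) + x^i T_i \sin(r)/r[/itex] agrees with the full matrix exponential even for [itex]|x|[/itex] far beyond [itex]2\pi[/itex].

[code]
import numpy as np
from scipy.linalg import expm

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * s for s in sigma]        # su(2) generators T_i = i sigma_i

x = np.array([5.0, -9.0, 11.0])    # |x| far outside any "small x" regime
r = np.linalg.norm(x)
xT = sum(xi * Ti for xi, Ti in zip(x, T))

g_closed = np.cos(r) * np.eye(2) + xT * np.sin(r) / r
assert np.allclose(g_closed, expm(xT))                       # exact, not just small x
assert np.allclose(g_closed @ g_closed.conj().T, np.eye(2))  # and it is in SU(2)
[/code]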
 
  • #58
Taoy said:
What kind of object is [itex] e_\alpha ? [/itex], and what kind of object is [itex] \gamma_\alpha ? [/itex]
Are you using upper and lower arrows to purely signify differential geometry objects? Why not arrows on the gamma too; I take it that this is a vector (as apposed to a dual vector)?

[tex]
\underrightarrow{e^\alpha} = \underrightarrow{dx^i} \left( e_i \right)^\alpha
[/tex]
is one of the orthonormal 1-form basis elements (indexed by [itex]\alpha[/itex]), dual to the corresponding member of the basis of orthonormal tangent vectors.
[tex]
\left( e_i \right)^\alpha
[/tex]
are the frame coefficients (aka vielbein coefficients).

[tex]
\gamma_\alpha
[/tex]
is one of the Clifford algebra basis vectors.

Yes, I put arrows over tangent vectors, arrows under forms, and no arrows under or over coefficients or Lie algebra or Clifford algebra elements such as [itex]\gamma_\alpha[/itex] . The number of arrows in an expression is "conserved" -- with upper arrows cancelling lower arrows, via vector-form contraction. If some object has a coordinate basis 1-form as part of it, which has an under arrow, then that object also gets an under arrow.
 
  • #59
Hi SA!

Have you looked around the new wiki yet? It was somewhat inspired by some comments we exchanged in another forum. :)

selfAdjoint said:
The [itex]e_\alpha[/itex] are the "legs" of the vierbein or frame; four orthonormal vectors based at a typical point of the manifold.

Yes, but I'm careful to distinguish the vierbein and inverse vierbein, using arrow decorations. The orthonormal basis vectors are
[tex]
\vec{e_\alpha} = \left(e^-_\alpha\right)^i \vec{\partial_i}
[/tex]
while the frame, or vierbein, 1-forms are
[tex]
\underrightarrow{e^\alpha} = \left(e_i\right)^\alpha \underrightarrow{dx^i}
[/tex]
They satisfy
[tex]
\vec{e_\alpha} \underrightarrow{e^\beta}
= \left(e^-_\alpha\right)^i \vec{\partial_i} \left(e_j\right)^\beta \underrightarrow{dx^j}
= \left(e^-_\alpha\right)^i \left(e_j\right)^\beta \delta_i^j
= \left(e^-_\alpha \right)^i \left(e_i\right)^\beta
= \delta_\alpha^\beta
[/tex]
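In matrix language, that last line just says the inverse vierbein coefficients form the matrix inverse of the frame coefficients; a trivial numerical sketch (assuming numpy):

[code]
import numpy as np

rng = np.random.default_rng(1)
e = rng.normal(size=(4, 4))   # frame coefficients (e_i)^alpha, any invertible matrix
e_inv = np.linalg.inv(e)      # inverse vierbein coefficients (e^-_alpha)^i

# (e^-_alpha)^i (e_i)^beta = delta_alpha^beta
assert np.allclose(e_inv @ e, np.eye(4))
[/code]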

I think the [itex]\gamma_\alpha[/itex] are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).

It is a great choice of notation because they're Clifford vectors, which ARE represented by Dirac matrices. :) The same way SU(2) generators are represented by i times the Pauli matrices. You will do perfectly well thinking of [itex]\gamma_\alpha[/itex] as Dirac matrices if you like. (But one doesn't need to -- the same way one can talk about the su(2) Lie algebra without explicitly using Pauli matrices.)

Good to see you over here.
 
  • #60
Taoy said:
What happened to the [itex] x^i x^j x^k \epsilon_{jkB} (\cos(r)/r^2 - \sin^2(r)/r^4) [/itex] term?

It's zero.
[tex] x^j x^k \epsilon_{jkB} = 0 [/tex]

p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.

I'll go check.
 
  • #61
Taoy said:
p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.

Close, but that's not what I just got. Check all your signs.
( Or I'll have to wake up tomorrow morning and find out I need to check mine. ;)
 
  • #62
Originally Posted by Taoy
Whilst we're here, where does the condition [itex]{({x^1})^2 + ({x^2})^2+ ({x^3})^2}=1[/itex] come from?



In this thread we have successfully shown the double covering of SO(3) by SU(2).
SO(3) is the group of rotations in 3 dimensions.
A rotation can be represented either by an orthogonal matrix with determinant 1, by an axis and a rotation angle,
by a unit quaternion via the map from the 3-sphere to SO(3), or by Euler angles.

Let's choose quaternions...

Every quaternion z = a + bi + cj + dk can be viewed as a sum a + u of a real number a
(called the “real part” of the quaternion) and a vector u = (b, c, d) = bi + cj + dk in [itex]R^{3} [/itex] (called the “imaginary part”).

Consider now the quaternions z with modulus 1. They form a multiplicative group, acting on [itex]R^{3} [/itex].

Such a quaternion can be written [itex] z = \cos(\frac{1}{2} \alpha)+\sin(\frac{1}{2} \alpha)\xi[/itex],
which looks like Joe's equation [itex]U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)[/itex],

with [itex]\xi[/itex] being a normalized vector... Are Lie group generators normalized!?

Like any linear transformation, a rotation can always be represented by a matrix. Let R be a given rotation.
Since the group SO(3) is a subgroup of O(3), it is also orthogonal.
This orthogonality condition can be expressed in the form

[tex]R^T R = I [/tex]

where [itex]R^T[/itex] denotes the transpose of R.


The subgroup of orthogonal matrices with determinant +1 is called the special orthogonal group SO(3).
For an orthogonal matrix R, [itex]\det(R^T)=\det R[/itex], which implies [itex](\det R)^2=1[/itex], so that det R = +1 or -1.


But the group SU(2) is isomorphic to the group of quaternions of absolute value 1, and is thus diffeomorphic to the 3-sphere.
We have here a map from SU(2) onto the 3-sphere (whose points can then be parametrized by means of angles
[itex]\theta[/itex] and [itex]\phi[/itex]) (spherical coordinates).

Actually, unit quaternions and the unit 3-sphere [itex]S^3[/itex] describe almost the same thing (isomorphism).


Because the set of unit quaternions is closed under multiplication, [itex]S^3[/itex] takes on the structure of a group.
Moreover, since quaternionic multiplication is smooth, [itex]S^3[/itex] can be regarded as a real Lie group.
It is a nonabelian, compact Lie group of dimension 3.

A pair of unit quaternions [itex]z_l[/itex] and [itex]z_r[/itex] can represent any rotation in 4D space.
Given a four-dimensional vector v, and pretending that it is a quaternion, we can rotate the vector v like this: [itex]z_lvz_r[/itex]

By using a matrix representation of the quaternions, H, one obtains a matrix representation of [itex]S^3[/itex].
One convenient choice is:


[tex] x^1 + x^2i + x^3j + x^4k = \left[\begin{array}{cc}x^1+i x^2 & x^3 + i x^4\\-x^3 + i x^4 &x^1-i x^2\end{array}\right][/tex]

which can be related somehow!? to Garrett's matrix...

[tex]T = x^i T_i = \left[\begin{array}{cc}i x^3 & i x^1 + x^2\\i x^1 - x^2 & -i x^3\end{array}\right][/tex]
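As a quick numerical aside (a sketch, assuming numpy): the first matrix above is unitary with determinant 1 exactly when the quaternion has norm 1, which is where the 3-sphere condition on the four [itex]x[/itex]'s shows up.

[code]
import numpy as np

def quat_matrix(x1, x2, x3, x4):
    """The 2x2 complex representation of x1 + x2 i + x3 j + x4 k above."""
    return np.array([[x1 + 1j * x2, x3 + 1j * x4],
                     [-x3 + 1j * x4, x1 - 1j * x2]])

x = np.array([0.5, -0.5, 0.5, 0.5])   # sum of squares is 1
M = quat_matrix(*x)
assert np.isclose(np.linalg.det(M), np.sum(x**2))  # det M = |q|^2 in general
assert np.allclose(M @ M.conj().T, np.eye(2))      # unit norm <=> M in SU(2)
[/code]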



Garrett, I have 2 questions for you:
What is the website of your latest publications (quaternions and others)?
and
Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign),
we have a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is {+I, -I}.
What does "whose kernel is {+I, -I}" mean?
 
  • #63
selfAdjoint said:
The [itex]e_\alpha[/itex] are the "legs" of the vierbein or frame; four orthonormal vectors based at a typical point of the manifold. I think the [itex]\gamma_\alpha[/itex] are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).

No, the [itex] \gamma_i [/itex]'s are actually Clifford vectors. Interestingly, in spaces with signature (3,1) we'll see that these Clifford gamma elements have an algebra identical to that of the Dirac matrices under the geometric product, which is probably why Garrett calls them gammas in the first place. (Hestenes uses this notation too.)
 
  • #64
garrett said:
It's zero.
[tex] x^j x^k \epsilon_{jkB} = 0 [/tex]

I really must stop doing this late at night! (: Of course it's symmetric in the [itex]x[/itex]'s and antisymmetric in the [itex]\epsilon[/itex]! Doh!
 
  • #65
garrett said:
[tex]
\gamma_\alpha
[/tex]
is one of the Clifford algebra basis vectors.

Yes, I put arrows over tangent vectors, arrows under forms, and no arrows under or over coefficients or Lie algebra or Clifford algebra elements such as [itex]\gamma_\alpha[/itex] .

I thought that you wanted to keep elements of the vector space and of the dual space separate and distinct? The Clifford algebra elements can be geometrically interpreted as a vector basis, and an arbitrary vector expanded in them,

[tex] v = v^i \gamma_i = v_i \gamma^i [/tex]
where
[tex] \gamma^i \cdot \gamma_j = \delta^{i}_{j} [/tex]

Are you less worried about preserving the distinction between [itex] \vec \gamma_i [/itex] and [itex] \underrightarrow{\gamma^i} [/itex] because of the presence of an implied metric?
 
  • #66
Taoy said:
I thought that you wanted to keep elements of the vector space and of the dual space separate and distinct? The Clifford algebra elements can be geometrically interpreted as a vector basis, and an arbitrary vector expanded in them,

[tex] v = v^i \gamma_i = v_i \gamma^i [/tex]
where
[tex] \gamma^i \cdot \gamma_j = \delta^{i}_{j} [/tex]

Are you less worried about preserving the distinction between [itex] \vec \gamma_i [/itex] and [itex] \underrightarrow{\gamma^i} [/itex] because of the presence of an implied metric?

Yes, that's it exactly.

For any smooth manifold, you always have a tangent vector space at each point spanned by a set of coordinate basis vectors, [itex]\vec{\partial_i}[/itex]. It's also always natural to build the dual space to this one at each point, spanned by the coordinate basis 1-forms, [itex]\underrightarrow{dx^i}[/itex]. By definition, these satisfy
[tex]
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j
[/tex]
which is an inner product between the two spaces. But there's no metric necessarily around. Mathematicians are smarter and lazier than I am, so they don't bother to write these little arrows like I do -- which I mostly write to remind me what the vector or form grade of a tangent space or cotangent space object is. They always just keep track of this in their heads.

OK, that's it for the two natural spaces (tangent vectors and forms) over any manifold. Now we introduce a third space -- a Clifford algebra. By definition, our Clifford algebra has a nice diagonal metric:
[tex]
\gamma_\alpha \cdot \gamma_\beta = \eta_{\alpha \beta}
[/tex]
This is the Minkowski metric when we work with spacetime. It doesn't really work to put any grade indicator over Clifford elements since it is often natural to add objects of different grade. Also, even though it sort of looks like there are two sets of Clifford basis vectors, [itex]\gamma_\alpha[/itex] and [itex]\gamma^\alpha[/itex], there is really only one set since
[tex]
\gamma_\alpha = \pm \gamma^\alpha
[/tex]

I use Latin indices (i,j,k,...) for coordinates and the tangent and form basis, and Greek indices ([itex]\alpha,\beta,...[/itex]) for Clifford algebra indices, to further emphasize the distinction between the two spaces. This is identical to how we have separate coordinate indices and Lie algebra indices (A,B,...) floating around when working with groups.

Clifford algebra, you see, is the Lie algebra of physical space. :)
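To make the Clifford dot product concrete, here is a numerical sketch (assuming numpy), representing the [itex]\gamma_\alpha[/itex] by Dirac matrices as discussed above: the symmetrized product of the basis vectors reproduces [itex]\eta_{\alpha \beta}[/itex].

[code]
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac matrices in the Dirac representation: gamma_0 and the gamma_i
g0 = np.kron(np.diag([1., -1.]), np.eye(2)).astype(complex)
gammas = [g0] + [np.kron(np.array([[0, 1], [-1, 0]]), s) for s in sigma]

eta = np.diag([1., -1., -1., -1.])
for a in range(4):
    for b in range(4):
        dot = (gammas[a] @ gammas[b] + gammas[b] @ gammas[a]) / 2
        assert np.allclose(dot, eta[a, b] * np.eye(4))  # gamma_a . gamma_b = eta_ab
[/code]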
 
  • #67
I should also explicitly say that many geometric objects, like
[tex]
\underrightarrow{e} = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha
[/tex]
a Clifford valued 1-form, are valued in both the cotangent vector space AND the Clifford algebra space at a manifold point. In this way, the frame, [itex]\underrightarrow{e}[/itex], can provide a map from tangent vectors to Clifford algebra vectors.

Algebra valued forms, such as this one, were a favorite device of Cartan. And, as we've seen, they're useful in group theory as well as in GR.
 
  • #68
Originally Posted by Mehdi
Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign),
we have a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is { + I, − I}.
What does "whose kernel is { + I, − I}" mean

In this case, "kernel is {+I, -I}" means that we have a double cover:
the group SO(3) has the double cover SU(2).

Could we then have a quotient of this kind?

[tex]\frac{ SU(2) }{\{I,-I\}} \simeq SO(3)[/tex]

The kernel {+I, -I} then belongs to SU(2)?
 
  • #69
The kernel of this map from SU(2) to SO(3) is equal to the set of elements of SU(2) that are mapped into the identity element of SO(3). So, yes, these are the elements 1 and -1 of SU(2).
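A numerical illustration of that kernel (a sketch, assuming numpy; the map below sends U to the rotation it induces on the Pauli vector): U and -U produce exactly the same SO(3) matrix, so +1 and -1 both land on the identity rotation.

[code]
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rotation(U):
    """SO(3) matrix of the map v -> U (v . sigma) U^dagger."""
    return np.array([[np.trace(sigma[i] @ U @ sigma[j] @ U.conj().T).real / 2
                      for j in range(3)] for i in range(3)])

theta = 1.2
U = np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * sigma[2]
R = rotation(U)
assert np.allclose(R @ R.T, np.eye(3))              # R is orthogonal
assert np.allclose(rotation(-U), R)                 # U and -U: the same rotation
assert np.allclose(rotation(np.eye(2)), np.eye(3))  # so +1, -1 both map to I
[/code]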

Heh Mehdi, want to take a shot at calculating the Killing vector fields corresponding to the right action of the su(2) Lie generators? Joe almost got them right, but we haven't heard from him in a while...
 
  • #70
OK for the Killing vector fields... I can try...

What about:
[tex]\frac{ SU(2) }{\{I,-I\}} \simeq SO(3)[/tex]
Is it true?...
 
