Group Theory Basics: Where Can I Learn More?

In summary, Group Theory is a branch of mathematics that studies the properties of groups and their operations. It has applications in many fields, such as physics, chemistry, and computer science. To learn more about Group Theory, one can refer to textbooks, online courses, and research papers. Additionally, universities or institutes may offer specialized courses on Group Theory. It is also beneficial to attend seminars, conferences, or workshops to gain a deeper understanding of this subject. Ultimately, practice and problem-solving are crucial to mastering Group Theory.
  • #71
Could you elaborate on what you mean by

it is irreducible if there is no part of the vector space left unstirred.

and what an invariant subspace is, please?
 
  • #72
Originally posted by Lonewolf
Explaining? Only the exponential map. I can't seem to see how it relates to what it's supposed to...maybe that gets explained further along in the text than I am, or I'm just missing the point.

You have had a math course where they said

exp(t) = 1 + t + t^2/2! + ... (you can continue this)

If not you will be hurled from a high cliff.

Suppose instead of 1 one puts the n x n identity matrix

and instead of t one puts some n x n matrix A.

At some time in our history someone had this fiendishly clever idea, put a matrix into the series in place of a number. It will converge and give a matrix.
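
If you want to see that idea in action numerically, here is a rough Python sketch (just an illustration, assuming numpy and scipy are available; the particular matrix is arbitrary): it sums the first 20 terms of the series and compares against scipy's built-in matrix exponential.

Code:
import numpy as np
from scipy.linalg import expm
 
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # any square matrix will do
 
# sum I + A + A^2/2! + A^3/3! + ... up to 20 terms
series = np.zeros_like(A)
term = np.eye(2)
for k in range(20):
    series = series + term
    term = term @ A / (k + 1)
 
print(series)        # partial sum of the series
print(expm(A))       # scipy's matrix exponential, for comparison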

But here is an easy question for YOU Lonewolf.

What if A is a diagonal matrix with, say, 1/2 all the way down the diagonal?

Then what is exp(A)?

Don't be reluctant to ask things. Don't wait for it to be "covered later". Any of us may fail to give a coherent answer, but ask.

But now I am asking you: can you calculate that n x n (well, to be specific, call it 3x3) matrix exp(A)? Can you write it down?

What is the trace of A?
What is the determinant of exp(A)?

If I am poking at you a little it is because I am in the dark about what you know and don't know.
 
  • #73
We're supposed to think of a Lie Group as a group of transformations with various properties. One of the more interesting ones is that we can form "one-parameter families" of transformations satisfying:

T_0 x = x
T_s T_t x = T_(s+t) x

We can think of the parameter as being the "size" of the transformation. An example will probably make this clear.


Consider R^2, and let T_θ be rotation around the origin through an angle of θ. Then T_0 is the identity transformation, and T_θ T_φ x = T_(θ+φ) x, so rotations form a one-parameter family when parametrized by the angle of rotation.


Since we have this continuous structure, it's natural to extend the ideas of calculus to Lie Groups. So, what if we consider an infinitesimal transformation T_dt in a one-parameter family?

Let's do an example using rotations in R^2. Applying rotation T_θ can be expressed by premultiplying by the matrix:

Code:
/ cos θ -sin θ \
\ sin θ  cos θ /

So what if we plug in an infinitesimal parameter? We get

Code:
/ cos dθ -sin dθ \ = / 1  -dθ \
\ sin dθ  cos dθ /   \ dθ  1  /

 = / 1 0 \ + / 0 -1 \ * dθ
   \ 0 1 /   \ 1  0 /

So the infinitesimal rotations are simply infinitesimal translations. This is true in general; we can make locally linear approximations to transformations just like ordinary real functions, such as:

f(x + dx) = f(x) + f'(x) dx

We call the algebra of infinitesimal transformations a Lie Algebra.


The interesting question is how to go the other way. What if we had the matrix

Code:
/ 0 -1 \
\ 1  0 /

and we wanted to go the other way to discover this is the derivative of a family of transformations?

Well, integration won't work, so let's take a different approach; let's repeatedly apply our linear approximation. If X is our element from the Lie algebra, then (1 + t X) is approximately the transformation T_t we seek. We can improve our approximation by applying the approximation twice, each time for half as long:

(1 + (t/2) X)^2

And in general we can break it up into n legs:

(1 + (t/n) X)^n

So then we might suppose that:

T_t = lim_(n->∞) (1 + tX/n)^n

And just like in the ordinary case, this limit evaluates to:

T_t = e^(tX)

That's where the exponential map comes from!

You can then verify that the derivative of T_t at t = 0 is indeed X.


To summarize, we exponentiate elements of the Lie Algebra (in other words, apply an infinitesimal transformation an infinite number of times) to yield elements of the Lie Group.
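
Here is a quick numerical sanity check of that limit (a Python sketch assuming numpy/scipy; X is the 2x2 generator from above): (I + tX/n)^n really does crawl toward the rotation through angle t as n grows.

Code:
import numpy as np
from scipy.linalg import expm
 
X = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # the infinitesimal rotation generator
t = 0.7
 
rotation = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
 
for n in (10, 100, 10000):
    approx = np.linalg.matrix_power(np.eye(2) + (t / n) * X, n)
    print(n, np.max(np.abs(approx - rotation)))   # error shrinks as n grows
 
print(np.allclose(expm(t * X), rotation))   # exp(tX) is exactly the rotation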



edit: fixed some hanging formatting tags
 
Last edited:
  • #74
my browser draws a blank sometimes and shows boxes so I am
experimenting with typography a bit here. Nice post.
I don't seem to be able to get the theta to show up inside a "code" area. All I get is a box.

Well that is all right. I can read the box as a theta OK.
Strange that theta shows up outside the "code" area but not inside.

That is a nice from-first-principles way to introduce the
exponential of matrices.

Can you show

det exp(A) = exp (trace A)

in a similarly down-to-earth way?

I see it easily for diagonal matrices but when I thought about it I had to imagine putting the matrix in a triangular form
 
Last edited:
  • #75
YO! LONEWOLF You are about to see sl(2, C)

Lonewolf, your job is to react when people explain something in a way you can understand. Stamp feet. Make a hubbub of some kind.

You are about to see an example of a Lie algebra.

Hurkyl is about to show you what the L.A. is that belongs to the group of DET = 1 matrices for example SL(2, C).
The L.A. for SL(2,C) is written with lowercase as sl(2, C)

The L.G. of matrices with det = 1 is made by the exponential map exp(A) from TRACE ZERO matrices A,

because det exp(A) = exp(trace A) = exp(0) = 1.

So if Hurkyl takes one more step he can characterize the L.A.
of the group of det = 1 matrices.

Actually of any size and over the reals as well as the complexes I think. But just to be specific think of 2x2 matrices.

Lonewolf, do you understand this? Do you like it? I think it is terrific, like sailing on a windy day. L.G. and L.A. are really neat.

Well, it is probably 4 AM in the UK, so you cannot answer.
 
Last edited:
  • #76
If not you will be hurled from a high cliff.

I guess you don't have to bother coming over here and finding a high cliff then. :wink:

then what is exp(A)?

exp(A) =
Code:
(e^(1/2)  0        0      )
(0        e^(1/2)  0      )
(0        0        e^(1/2))

What is the trace of A?

trace(A) = sum of diagonal entries = a_11 + a_22 + a_33 = 3/2 for this A

What is the determinant of exp(A)?

det[exp(A)] = trace(A)
 
  • #77
And in general we can break it up into n legs:

(1 + (t/n) X)^n

This is pretty much when the penny dropped.

because det exp(A) = exp(trace A) = exp(0) = 1.

This makes sense as well, and I can see where the exponential map is used now. Thanks.
 
  • #78
det[exp(A)] = trace(A)
Very good! Except I think you mean:

det[exp(A)] = exp[trace(A)]

Probably a typo.

- Warren
 
  • #79
Oops, yeah. I probably should learn to read my posts...
 
  • #80
Well that is all right. I can read the box as a theta OK.
Strange that theta shows up outside the "code" area but not inside.

You're having font issues then. Your default font does indeed have the theta symbol, but the font your browser uses for the code blocks does not have a theta symbol (and replaces it with a box).


This is pretty much when the penny dropped.

Eep! I've never heard that phrase before, is that good or bad?


Can you show

det exp(A) = exp (trace A)

in a similarly down-to-earth way?

Nope. The only ways I know to show it are to diagonalize or to use the same limit approximation as above and the approximation:

det(I + A dt) = 1 + tr(A) dt

which you can verify by noting that all of the off diagonal entries are nearly zero, so the only important contribution is the product of the diagonal entries.
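
A tiny numerical illustration of that approximation (a sketch only, assuming numpy; the random matrix is arbitrary): for a fixed A, the error det(I + A dt) - (1 + tr(A) dt) shrinks roughly like dt^2.

Code:
import numpy as np
 
np.random.seed(0)
A = np.random.randn(3, 3)
 
for dt in (1e-1, 1e-2, 1e-3):
    exact = np.linalg.det(np.eye(3) + A * dt)
    approx = 1 + np.trace(A) * dt
    print(dt, exact - approx)    # error drops roughly like dt**2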


I think there's a really slick "down-to-earth" proof as well. I know the determinant is a measure of how much a transformation scales hypervolumes. (e.g. if the determinant of a 2x2 matrix near a point is 4, then applying the matrix will multiply the areas of figures near that point by 4) I know there's a nice geometrical interpretation of the trace, but I don't remember what it is.
 
  • #81
Originally posted by Hurkyl
Nope. The only ways I know to show it are to diagonalize or to use the same limit approximation as above and the approximation:

det(I + A dt) = 1 + tr(A) dt

which you can verify by noting that all of the off diagonal entries are nearly zero, so the only important contribution is the product of the diagonal entries.

All that shows is that the formula holds to good approximation for matrices with elements that are all much less than one.

One correct proof goes as follows:

For any matrix A, there is always a matrix C such that CAC^-1 is upper triangular, meaning that all elements below the diagonal vanish. The key properties needed for the proof are that the space of upper triangular matrices is closed under matrix multiplication, and that their determinants are the product of the elements on their diagonals. The only other thing we use is the invariance of the trace under cyclic permutations of its arguments, so that Tr(CAC^-1) = Tr A. The proof follows trivially.
 
Last edited:
  • #82
The proof to which I was alluding is:

det(e^A) = det(lim_(n->∞) (I + A/n)^n)
= lim_(n->∞) det((I + A/n)^n)
= lim_(n->∞) (det(I + A/n))^n
= lim_(n->∞) (1 + tr(A)/n + O(1/n^2))^n
= e^(tr A)
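
And a direct numerical check of the identity itself (a quick sketch, assuming numpy/scipy; any square matrix should do):

Code:
import numpy as np
from scipy.linalg import expm
 
np.random.seed(1)
A = np.random.randn(4, 4)
 
print(np.linalg.det(expm(A)))   # det(e^A)
print(np.exp(np.trace(A)))      # e^(tr A) -- the two should agree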
 
Last edited:
  • #83
another proof if you know some topology: diagonalizable matrices are dense in GL(n).
 
  • #84
Eep! I've never heard that phrase before, is that good or bad?

It's a good thing. We use it over here to mean the point where somebody realizes something. Sorry about that, I thought it was in wider use than it is.
 
  • #85
Originally posted by Lonewolf
It's a good thing. We use it over here to mean the point where somebody realizes something. Sorry about that, I thought it was in wider use than it is.

I always assumed it was like the coin dropping in a payphone.
Maybe going back to old times when cooking gas was metered
out by coin-operated devices---the penny had to drop for something to turn on.

I have lost track of this thread so much has happened.

Just to review something:
A skew-symmetric means A^T = -A,
and a skew-symmetric matrix must be zero down the diagonal,
so its trace is clearly zero. And another definition:
B orthogonal means B^T = B^-1

Can you prove that if
A is a skew symmetric matrix then exp(A) is orthogonal and
has det = 1?
I assume you can. It characterizes the Lie algebra "so(3)" that goes with the group SO(3). You may have noticed that they use lowercase "what(...)" to stand for the Lie algebra that goes with the Lie group "WHAT(...)"
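
If you want to see it numerically before proving it, here is a small Python sketch (an illustration only, not a substitute for the proof; it assumes numpy/scipy and uses a random skew-symmetric matrix):

Code:
import numpy as np
from scipy.linalg import expm
 
np.random.seed(2)
M = np.random.randn(3, 3)
A = M - M.T                    # a random skew-symmetric matrix: A^T = -A
 
B = expm(A)
print(np.allclose(B.T @ B, np.eye(3)))   # B^T = B^-1, i.e. B is orthogonal
print(np.linalg.det(B))                  # and det B = 1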

Excuse if this is a repeat of something I or someone else said earlier.
 
  • #86
SO(3) is defined to be the space of all 3x3 real matrices G such that:

G^t = G^-1
det G = 1

So what about its corresponding Lie Algebra so(3)? It is the set of all 3x3 real matrices A such that exp(tA) is in SO(3) for every real t.

So how do the constraints on SO(3) translate to constraints on so(3)?

The second condition is easy. If A is in so(3), then:

exp(tr A) = det exp(A) = 1

so tr A must be zero. Conversely, for any matrix A with tr A zero, the second condition will be satisfied.


The first one is conceptually just as simple, but technically trickier. Translated into so(3) it requires:

exp(A)^t = exp(A)^-1
exp(A^t) = exp(-A)
*** this step to be explained ***
A^t = -A

Therefore if A is in so(3) then A must be skew symmetric. And conversely, it is easy to go the other way to see that any skew symmetric matrix A satisfies the first condition.

Therefore, so(3) is precisely the set of 3x3 traceless skew symmetric matrices.


I skipped over a technical detail in the short proof above. If exponents are real numbers then the marked step is easy to justify by taking the logarithm of both sides... however logarithms are only so nice when we're working with real numbers! I left that step in my reasoning because you need it when working backwards.

The way to prove it going forwards is to consider:

exp(s A^t) = exp(-s A)

If A is in so(3), then this must be true for every s, because so(3) forms a real vector space. Now, we differentiate with respect to s to yield:

(A^t) exp(s A^t) = (-A) exp(-s A)

Which again must be true for all s. Now, plug in s = 0 to yield:

A^t = -A

This trick is a handy replacement for taking logarithms!


Anyways, we've proven now that so(3) is precisely all 3x3 real traceless skew symmetric matrices. In fact, we can drop "traceless" because real skew symmetric matrices must be traceless.

For matrix algebras we usually define the lie bracket as being the commutator:

[A, B] = AB - BA

I will now do something interesting (to me, anyways); I will prove that so(3) is isomorphic (as a Lie Algebra) to R^3 where the Lie bracket is the vector cross product!


The first thing to do is find a (vector space) basis for so(3) over R. The most general 3x3 skew symmetric matrix is:

Code:
/  0  a -b \
| -a  0  c |
\  b -c  0 /

Where a, b, and c are any real numbers. This leads to a natural choice of basis:

Code:
    /  0  0  0 \
A = |  0  0 -1 |
    \  0  1  0 /

    /  0  0  1 \
B = |  0  0  0 |
    \ -1  0  0 /

    /  0 -1  0 \
C = |  1  0  0 |
    \  0  0  0 /

As an exercise for the reader, you can compute that:
AB - BA = C
BC - CB = A
CA - AC = B

So now I propose the following isomorphism φ from so(3) to R^3:

φ(A) = i
φ(B) = j
φ(C) = k

And this, of course, extends by linearity:

φ(aA + bB + cC) = ai + bj + ck


So now let's verify that this is actually an isomorphism:

First, the vector space structure is preserved; φ is a linear map, and it takes a basis of the three-dimensional real vector space so(3) onto a basis of the three-dimensional real vector space R^3, so φ must be a vector space isomorphism.

The only remaining thing to consider is whether φ preserves Lie brackets. We can do so by considering the action on all pairs of basis elements (since the Lie bracket is bilinear).

φ([A, A]) = φ(AA - AA) = φ(0) = 0 = i * i = [i, i] = [φ(A), φ(A)]
(and similarly for [B, B] and [C, C])
φ([A, B]) = φ(AB - BA) = φ(C) = k = i * j = [i, j] = [φ(A), φ(B)]
(and similarly for other mixed pairs)

So we have verified that so(3) and (R^3, *) are isomorphic as Lie Algebras! If we so desired, we could then choose (R^3, *) as the Lie Algebra associated with SO(3), and define the exponential map as:

exp(v) = exp(φ^-1(v))

So, for example:

exp(tk) = rotation of t radians in the x-y plane
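
Here is a short numerical check of both the bracket relations above and that last claim (a sketch assuming numpy/scipy; A, B, C are the basis matrices from the post):

Code:
import numpy as np
from scipy.linalg import expm
 
A = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
B = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
C = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
 
# the bracket relations [A,B] = C, [B,C] = A, [C,A] = B ...
print(np.allclose(A @ B - B @ A, C))
print(np.allclose(B @ C - C @ B, A))
print(np.allclose(C @ A - A @ C, B))
 
# ... mirror the cross products i x j = k, etc.
i, j, k = np.eye(3)
print(np.allclose(np.cross(i, j), k))
 
# and exp(tC) really is a rotation by t in the x-y plane
t = 0.3
print(np.round(expm(t * C), 6))   # cos/sin block in the upper left, 1 in the corner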
 
Last edited:
  • #87
Hurkyl:

Great post!

- Warren
 
  • #88
Originally posted by chroot
Hurkyl:

Great post!

- Warren

I agree!
 
  • #89
Bah, there are no blushing emoticons!

Thanks guys!


I'm not entirely sure where to go from here, though, since I'm learning it with the rest of you! (so if any of you have things to post, or suggestions on which way we should be studying, feel free to say something! :smile:) But I did talk to one of my coworkers and got a three-hour introductory lecture on Lie Groups / Algebras in various contexts, and I think going down the differential geometry route would be productive (and it allows us to keep the representation theory in the representation theory thread!)... I think we are almost at the point where we can derive Maxwellean Electrodynamics as a U(1) gauge theory (which will motivate some differential geometry notions in the process), but I wanted to work out most of the details before introducing that.

Anyways, my coworker did suggest some things to do in the meanwhile; we should finish deriving the Lie algebras for the other standard Lie groups, such as su(2), sl(n; C), so(3, 1)... so I assign that as a homework problem for you guys to do in this thread! :smile:
 
  • #90
I think we are almost at the point where we can derive Maxwellean Electrodynamics as a U(1) gauge theory

As a what now?
 
  • #91
More talking with him indicates he may have been simplifying quite a bit when he brought up Maxwell EM. I'll let someone else explain what "gauge theory" means in general; I'm presuming I'll understand the ramifications after I work through the EM exercise, but I haven't done that yet. :smile:
 
  • #92
Just to help motivate the thread, I'll find su(n).

Lie algebra of U(n)

First, as a reminder, we know that U(n) is the unitary group of n x n matrices. You should program the word 'unitary' into your head so it reminds you of these conditions:

1) Multiplication by unitary matrices preserves the complex inner product: <Ax, Ay> = <x, y> = Σ_i x_i* y_i, where A is any member of U(n), x and y are any complex vectors, and * connotes complex conjugation.

2) A* = A^-1

3) A* A = I

4) |det A| = 1

Now, to find u(n), the Lie algebra of the Lie group U(n), I'm going to follow Brian Hall's work on page 43 of http://arxiv.org/math-ph/0005032

Recall that we can represent any[1] member of a matrix Lie group G by an exponentiation of a member of its Lie algebra g. In other words, for all U in U(n), there is a u in u(n) such that:

exp(tu) = U

where exp is the exponential mapping defined above. Thus exp(tu) is a member of U(n) when u is a member of u(n), and t is any real number.

Now, given that U* = U^-1 for members of U(n), we can assert that

(exp(tu))* = (exp(tu))^-1

Both sides of this equation can be simplified. The left side's conjugation operator can be shown to "fall through" the exponential, and the left side is equivalent to exp(tu*). Similarly, the -1 on the right side falls through, and the right side is equivalent to exp(-tu). (Exercise: it's easy and educational to show that the * and -1 work this way.) We thus have a simple relation:

exp(tu*) = exp(-tu)

As Hall says, if you differentiate this expression with respect to t at t=0, you immediately arrive at the conclusion that

u* = -u

Matrices which have this quality are called "anti-Hermitian." (the "anti" comes from the minus sign.) The set of n x n matrices {u} such that u* = -u is the Lie algebra of U(n).

Now how about su(n)?

Lie algebra of SU(n)

SU(n) is a subgroup of U(n) such that all its members have determinant 1. How does this affect the Lie algebra su(n)?

We only need to invoke one fact, which has been proven above. The fact is:

det(exp(X)) = exp(trace(X))

If X is a member of a Lie algebra, exp(X) is a member of the corresponding Lie group. The determinant of the group member must be the same as e raised to the trace of the Lie algebra member.

In this case, we know that all of the members of SU(n) have det 1, which means that exp(trace(X)) must be 1, which means trace(X) must be zero!

You can probably see now how su(n) must be. Like u(n), su(n) is the set of n x n anti-Hermitian matrices -- but with one additional stipulation: members of su(n) are also traceless.
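
A quick numerical illustration of that (a sketch only, assuming numpy/scipy; the random matrix and the projection to traceless are just a convenient way to manufacture an example): build a random traceless anti-Hermitian matrix and check that its exponential is unitary with determinant 1.

Code:
import numpy as np
from scipy.linalg import expm
 
np.random.seed(3)
n = 3
M = np.random.randn(n, n) + 1j * np.random.randn(n, n)
u = M - M.conj().T                       # anti-Hermitian: u* = -u
u = u - (np.trace(u) / n) * np.eye(n)    # subtract the trace part, keeping it anti-Hermitian
 
U = expm(u)
print(np.allclose(U.conj().T @ U, np.eye(n)))   # unitary
print(np.linalg.det(U))                         # determinant 1 (up to rounding)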

[1] You can't represent all group members this way in some groups, as has been pointed out -- but it's true for all the groups studied here.

- Warren

edit: A few very amateurish mistakes. Thanks, lethe, for your help.
 
Last edited by a moderator:
  • #93
The weather's been pretty hot and chroot's derivation of su(n) is really neat and clear so I'm thinking I will just be shamelessly lazy and quote Warren with modifications to get sl(n, C).

I see that he goes along with Brian Hall and others in using lower case to stand for the Lie Algebra of a group written in upper case. So su(n) is the L.A. that belongs to SU(n).

In accord with that notation, sl(n,C) is the L.A. that goes with the group SL(n,C), which is just the n x n complex matrices with det = 1. Unless I am overlooking something, all I have to do is just a trivial change in what Warren already did:

Originally posted by chroot, with minor change for SL(n, C)


Lie algebra of SL(n, C)

SL(n, C) is a subgroup of GL(n, C) such that all its members have determinant 1. How does this affect the Lie algebra sl(n, C)?

We only need to invoke one fact, which has been proven above. The fact is:

det(exp(X)) = exp(trace(X))

If X is a member of a Lie algebra, exp(X) is a member of the corresponding Lie group. The determinant of the group member must be the same as e raised to the trace of the Lie algebra member.

In this case, we know that all of the members of SL(n, C) have det 1, which means that exp(trace(X)) must be 1, which means trace(X) must be zero!
...sl(n, C) is the set of n x n complex matrices but with one additional stipulation: members of sl(n, C) are...traceless.

That didn't seem like any work at all. Even in this heat wave.
Hurkyl said to give the L.A. of SO(3,1), so maybe I should do that so as not to look like a slacker. I really like the clarity of both Hurkyl's and chroot's style.
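
And a tiny numerical check of the sl(n, C) statement (a sketch assuming numpy/scipy; the random matrix is arbitrary): exponentiate a random traceless complex matrix and confirm the determinant comes out 1.

Code:
import numpy as np
from scipy.linalg import expm
 
np.random.seed(4)
n = 2
M = np.random.randn(n, n) + 1j * np.random.randn(n, n)
X = M - (np.trace(M) / n) * np.eye(n)   # traceless complex matrix, i.e. in sl(n, C)
 
print(np.trace(X))                # ~0
print(np.linalg.det(expm(X)))     # ~1, since det(exp X) = exp(tr X)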

I guess Lethe must have raised the "topologically connected" issue. For a rough and ready treatment, I feel like glossing over manifolds and all that, but it is nice to picture how the det = 0 "surface" slices the GL group into two chunks...

Because "det = 0" matrices, being non-invertible, are not in the group!


...so that only those with det > 0 are in the "connected component of the identity". The one-dimensional subgroups generated by elements of the L.A. are like curves radiating from the identity and they cannot leap the "det = 0" chasm and reach the negative determinant chunk.

Now that I think of it, Lethe is here and he might step in and do SO(3,1) before I attend to it!
 
Last edited:
  • #94
Hurkyl has a notion of where to go. I want to follow the hints taking shape here:
***********
...But I did talk to one of my coworkers and got a three hour introductory lecture on Lie Groups / Algebras in various contexts, and I think going down the differential geometry route would be productive (and it allows us to keep the representation theory in the representation theory thread!)... I think we are almost at the point where we can derive Maxwellean Electrodynamics as a U(1) gauge theory (which will motivate some differential geometry notions in the process), but I wanted to work out most of the details before introducing that.

Anyways, my coworker did suggest some things to do in the meanwhile; we should finish deriving the Lie algebras for the other standard Lie groups, such as SU(2), SL(n; C), SO(3, 1)... so I assign that as a homework problem for you guys to do in this thread!
***********
the suggestion is----discuss SO(3,1) and so(3,1). Then back to Hurkyl for an idea about the next step. Let's go with that.

Originally posted by chroot, changed to be about SO(3,1)


Lie algebra of SO(3,1)

SO(3,1) is just the group of Special Relativity that gets you to the moving observer's coordinates---it contains 4x4 real matrices that preserve a special "metric" dx^2 + dy^2 + dz^2 - dt^2

to keep the space and time units the same, distance is measured in light-seconds----or anyway time and distance units are made compatible so that c = 1 and I don't have to write ct everywhere and can just write t.

This "metric" is great because light-like vectors have norm zero. So the definition that a matrix in this group takes any vector to one of the same norm means that light-like stays light-like!

All observers, even those in relative motion, agree about what is light-like---the world line of something going that speed. (Another way of saying the grandfather axiom of SR that all agree about the speed of light.)

the (3,1) indicates the 3 plus signs followed by the 1 minus sign in the "metric".

So we implement the grand old axiom of SR by having this special INNER PRODUCT* on our 4D vector space:

1) Multiplication by SO(3,1) matrices preserves the special inner product: <Ax, Ay> = <x, y> = Σ*_i x_i y_i, where A is any member of SO(3,1), x and y are any real 4D vectors, and * is a reminder that the last term in the sum gets a minus sign.

2) This asterisk notation is a bit clumsy and what Brian Hall does instead is define a matrix g which is diag(1,1,1,-1).

g looks like the 4 x 4 identity except for one minus sign
BTW notice that g^-1 = g
and also that g^t = g

and he expresses the condition 1) by saying

A^t g A = g

[[[[to think about...express <x,y> as x^t g y
express <Ax, Ay> as x^t A^t g A y]]]]

3) Then he manipulates 2) to give
g^-1 A^t g = A^-1

...multiply both sides of 2) on the left by g^-1
to give
g^-1 A^t g A = I

then multiply both sides on the right by A^-1...

4) then---ahhhh! the exponential map at last----he writes 3) using a matrix A = exp X, and solves for a condition on X


g^-1 A^t g = g^-1 exp(X^t) g = exp(g^-1 X^t g) = exp(-X) = A^-1

the only way this will happen is if X satisfies the condition
g^-1 X^t g = -X

it is something like what we saw before with SO(n) except gussied up with g, so it is not a plain transpose or a simple skew symmetric condition. also the condition is the same as

g X^t g = -X

because g is equal to its inverse.
Better post this and proofread later.

.
BTW multiplying by g on right and left like that does not change trace, so as an additional check

trace(X) = trace(g X^t g) = trace(-X) = -trace(X)

showing that trace(X) = 0

so now we know what matrices comprise so(3,1)

they are the ones that satisfy

g X^t g = -X
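
Here is a quick numerical check of that conclusion (a sketch assuming numpy/scipy; the construction X = M - g M^t g is just a convenient way to manufacture a matrix satisfying the condition, it is not from Hall):

Code:
import numpy as np
from scipy.linalg import expm
 
g = np.diag([1.0, 1.0, 1.0, -1.0])
 
np.random.seed(5)
M = np.random.randn(4, 4)
X = 0.3 * (M - g @ M.T @ g)        # this construction guarantees g X^t g = -X
 
print(np.allclose(g @ X.T @ g, -X))    # the so(3,1) condition
print(np.trace(X))                     # and trace zero, as derived above
 
A = expm(X)
print(np.allclose(A.T @ g @ A, g))     # exp(X) preserves the metric g
print(np.linalg.det(A))                # and has determinant 1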
 
Last edited:
  • #95
Not sure how relevant this is to where the thread is going, but I didn’t want people to think I’d given up on it.

The Heisenberg Group

The set of all upper triangular 3x3 matrices with 1s on the diagonal, coupled with matrix multiplication, forms a group known as the Heisenberg Group, which will be denoted H. (Such matrices automatically have determinant 1.) The matrices A in H are of the form

Code:
(1 a b)
(0 1 c)
(0 0 1)

where a,b,c are real numbers.

If A is in the form above, the inverse of A can be computed directly to be

Code:
(1  -a  ac-b)
(0   1  -c  )
(0   0   1  )

H is thus a subgroup of GL(3;R).

The limit of a convergent sequence of matrices of the form of A is again of the form of A. (This bit wasn't as clear to me as the text indicated. Can someone help?)

The Lie Algebra of the Heisenberg Group

Consider a matrix X such that X is of the form

Code:
(0  d  e)
(0  0  f)
(0  0  0)

then exp(X) is a member of H.

If W is any matrix such that exp(tW) is of the form of A for all t, then every entry of W = d(exp(tW))/dt at t = 0 that lies on or below the diagonal must be 0, so W is of the form X.
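
A quick numerical sketch of this (assuming numpy/scipy; the entries d, e, f are arbitrary): a strictly upper triangular X is nilpotent, so the exponential series stops after three terms and lands back in the form of A.

Code:
import numpy as np
from scipy.linalg import expm
 
d, e, f = 1.0, 2.0, 3.0
X = np.array([[0, d, e],
              [0, 0, f],
              [0, 0, 0]])
 
print(np.allclose(X @ X @ X, 0))     # X is nilpotent: X^3 = 0
print(expm(X))                       # ones on the diagonal, zeros below: the form of A
print(np.allclose(expm(X), np.eye(3) + X + X @ X / 2))   # the series terminates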

Apologies for the possible lack of clarity. I kinda rushed it.
 
  • #96
I don't think I'll have time over the next week or so to prepare anything, so it'd be great if someone else can introduce something (or pose some questions) for a little while!
 
  • #97
Originally posted by Hurkyl
I don't think I'll have time over the next week or so to prepare anything, so it'd be great if someone else can introduce something (or pose some questions) for a little while!

Hey Warren, any ideas?
Maybe we should hunker down and wait till
Hurkyl gets back because he seemed to give the
thread some direction. But on the other hand
we don't want to depend on his initiative to the
point that it is a burden! What should we do?

I am thinking about the Lorentz group, or that thing SO(3,1)
I discussed briefly a few days ago.
Lonewolf is our only audience. (in part a fiction, but one must
imagine some listener or reader)
Maybe we should show him explicit forms of matrices implementing the Lorentz
and Poincare groups.

It could be messy but on the other hand these are so
basic to relal speciativity. Do we not owe it to ourselves
to investigate them?

Any particular interests or thoughts about what to do?
 
Last edited:
  • #98
Lie algebra of Lorentz group

If we were Trekies we might call it "the Spock algebra of the Klingon group" or if we were on firstname basis with Sophus Lie and Hendrik Lorentz we would be talking about
"the Sophus algebra of the Hendrik group"
such solemn name droppers... Cant avoid it.

Anyway I just did some scribbling and here it is. Pick any 6 numbers a,b,c,d,e, f
This is a generic matrix in the Lie algebra of SO(3;1):

Code:
 0  a  b  c
-a  0  d  e
-b -d  0  f
 c  e  f  0

what I did was take a line from preceding post (also copied below)
g^-1 X^t g = -X

remember that g is a special diagonal matrix diag(1,1,1,-1)

and multiply on both sides by g to get
X^t g = -gX

that says that X transpose with its rightmost column negged
equals -1 times the original X with its bottom row negged.

This should be really easy to see so I want to make it that way.
Is this enough explanation for our reader? Probably it is.

But if not, let's look at the original X with its bottom row negged


Code:
 0  a  b  c
-a  0  d  e
-b -d  0  f
-c -e -f  0

And let's look at the transpose with its rightmost column negged


Code:
0  -a  -b  -c
a   0  -d  -e
b   d   0  -f
c   e   f   0

And just inspect to see if the first is -1 times the second.
It does seem to be the case.

Multiplying by g on the left negates the bottom row, and multiplying on the right negates the rightmost column (I should have said that at the beginning); otherwise it doesn't change the matrix.

Ahah! I see that what I have just done is a homework problem in Brian Hall's book. It is exercise #7 on page 51: "write out explicitly the general form of a 4x4 real matrix in so(3;1)".
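
To double-check that generic form numerically, here is a little Python sketch (assuming numpy/scipy; the particular values of a through f are arbitrary):

Code:
import numpy as np
from scipy.linalg import expm
 
a, b, c, d, e, f = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6
X = np.array([[ 0,  a,  b,  c],
              [-a,  0,  d,  e],
              [-b, -d,  0,  f],
              [ c,  e,  f,  0]])
 
g = np.diag([1.0, 1.0, 1.0, -1.0])
 
print(np.allclose(g @ X.T @ g, -X))    # the defining condition g X^t g = -X
A = expm(X)
print(np.allclose(A.T @ g @ A, g))     # so exp(X) preserves the Lorentz metric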



Originally a chroot post but changed to be about SO(3;1)


Lie algebra of SO(3;1)

SO(3;1) is just the group of Special Relativity that gets you to the moving observer's coordinates---it contains 4x4 real matrices that preserve a special "metric" dx^2 + dy^2 + dz^2 - dt^2

to keep the space and time units the same, distance is measured in light-seconds----or anyway time and distance units are made compatible so that c = 1 and I don't have to write ct everywhere and can just write t.

1) Multiplication by SO(3;1) matrices preserves the special inner product: <Ax, Ay> = <x, y> = Σ*_i x_i y_i, where A is any member of SO(3,1), x and y are any real 4D vectors, and * is a reminder that the last term in the sum gets a minus sign.

2) This asterisk notation is a bit clumsy and what Brian Hall does instead is define a matrix g which is diag(1,1,1,-1).

g looks like the 4 x 4 identity except for one minus sign
BTW notice that g^-1 = g
and also that g^t = g

and he expresses the condition 1) by saying

A^t g A = g

3) Then he manipulates 2) to give
g^-1 A^t g = A^-1

4) then---ahhhh! the exponential map at last----he writes 3) using a matrix A = exp X, and solves for a condition on X

g^-1 A^t g = g^-1 exp(X^t) g = exp(g^-1 X^t g) = exp(-X) = A^-1

the only way this will happen is if X satisfies the condition
g^-1 X^t g = -X

it is something like what we saw before with SO(n) except gussied up with g, so it is not a plain transpose or a simple skew symmetric condition. also the condition is the same as

g X^t g = -X

 
Last edited:
  • #99
I've been trying to devise a good way to introduce differential manifolds...

(by that I mean that I hate the definition to which I was introduced and I was looking for something that made more intuitive sense!)

I think I have a way to go about it, but it dawned on me that I might be spending a lot of effort over nothing; I should have asked if everyone involved is comfortable with terms like "differentiable manifold" and "tangent bundle".
 
  • #100
Originally posted by Hurkyl
I've been trying to devise a good way to introduce differential manifolds...

(by that I mean that I hate the definition to which I was introduced and I was looking for something that made more intuitive sense!)

I think I have a way to go about it, but it dawned on me that I might be spending a lot of effort over nothing; I should have asked if everyone involved is comfortable with terms like "differentiable manifold" and "tangent bundle".

I like Marsden's chapter 4 very much
"Manifolds, Vector Fields, and Differential Forms"
pp 121-145 in his book----25 pages
His chapter 9 covers Lie groups and algebras, not too
differently from Brian Hall that we have been using.
So Marsden is describing only the essentials.
I will get the link so you can see if you like it.

Lonewolf and I started reading Marsden's chapter 9 before
we realized Brian Hall was even better. So at least two of us
have some acquaintance with the Marsden book.

We could just ask if anybody had any questions about
Marsden chapter 4----those 25 pages----and if not simply
move on.

On the other hand if you have thought up a better way
to present differential geometry and want listeners, go for it!
Here is the url for Marsden.

http://www.cds.caltech.edu/~marsden/bib_src/ms/Book/
 
Last edited by a moderator:
  • #101
H., I had another look at Marsden.
His chapter 9 is too hard and the book as a whole is
too hard. It is a graduate textbook.
But maybe his short chapter 4 on manifolds, vector
fields and differential forms is not too hard.
It is a short basic summary, and it seems OK to me.
If you agree then perhaps this is a solution.
We don't have to give the definitions because
they are all summarized for us.

We should proceed only where it will give us pleasure,
and at our own pace, being under no obligation to anyone. If Lonewolf is still around we can provide whatever explanations
he asks for so he can keep up with the party. If we decide
it is time to stop we will stop (substantial ground has already
been covered). I shall be happy with whatever you decide.

I am interested to know if there are any matrix group, lie group,
lie algebra, repr. theory topics that you would like to hit.
E.g. sections or chapters of Brian Hall (or propose some other online text).

I am currently struggling to understand a little about spin foams
but can find no direct connection there to this thread.
Baez has an introductory paper gr-qc/9905087
 
  • #102
I've been thinking more about my idea of trying to derive Maxwell's equations from the geometry of M4*U(1) (M4 = Minkowski space)... the way the idea was presented to me, I got the impression it would be an interesting application of Lie groups requiring just a minimal amount of differential geometry... but as I've been mulling over what we'd have to do to get there, I'm thinking it might actually be an interesting application of differential geometry requiring just a minimal amount of Lie groups. :frown:

So basically, I don't know where to go from here!


The way I usually like to learn is to delve a little bit into a subject, then figure out a (possibly almost trivial) concrete example of how the subject can be used to describe "real world" things, and then continue studying deeper into the subject. The problem is I just don't know what "real world" thing we can get to early on. I guess the solution is to just delve deeper into the math before looking back at the real world.
 
Last edited:
  • #103
Originally posted by Hurkyl

The way I usually like to learn is to delve a little bit into a subject, then figure out a (possibly almost trivial) concrete example of how the subject can be used to describe "real world" things, and then continue studying deeper into the subject. The problem is I just don't know what "real world" thing we can get to early on. I guess the solution is to just delve deeper into the math before looking back at the real world.

I just happened onto a 3 page online account of
"Representation Theory of SL(2,C)"

It is an appendix in an 8 page paper by Perez Rovelli
"Spin Foam Model for Lorentzian General Relativity"

They lifted it from W. Ruhl (1970) "The Lorentz Group and Harmonic Analysis" and some other classical sources like that.

Baez also reviews SL(2,C) rep theory on page 4 of what I think is a great paper he wrote with Barrett, gr-qc/0101107.
That paper Baez and Barrett "Integrability for Relativistic Spin
Networks" is 22 pages but there is already a good bit of grist for the mill in just the first 4 or 5 pages.

If you have other directions in mind, drop a few hints and I will try to come up with source material.

Oh! We had better not forget to go over the irreps of SU(2).
Do you happen to have an online source? That is easier.
What was I thinking of! Irreps of SU(2) naturally come well
before one tries SL(2,C).

think of something nice and simple, my brain is fried from spin foams and 10j symbols
 
  • #104
I'm still around. Can someone explain tangent bundles, please. Marsden defines them as the disjoint union of tangent vectors to a manifold M at the points m in M. Am I right in thinking that this gives us a set containing every tangent vector to the manifold, or did I miss something?
 
  • #105
Originally posted by Lonewolf
I'm still around. Can someone explain tangent bundles, please. Marsden defines them as the disjoint union of tangent vectors to a manifold M at the points m in M. Am I right in thinking that this gives us a set containing every tangent vector to the manifold, or did I miss something?

That's almost right. The tangent bundle is every tangent vector at any point of the manifold, along with the manifold itself.

The tangent bundle is itself given the structure of a manifold.
 
Last edited:
