Frobenius theorem applied to frame fields

  • #1
cianfa72
TL;DR Summary
Frobenius's theorem applied to frame fields
Frobenius's theorem gives necessary and sufficient conditions for a smooth distribution ##\mathcal D## defined on an ##n##-dimensional smooth manifold to be completely integrable. Now consider a smooth frame field given by ##n## linearly independent smooth vector fields.

I suppose Frobenius's theorem always holds true in that case. In particular, the Lie bracket ##[X,Y]## of each pair of frame field vectors ##X,Y## trivially lies in the span of the frame field vectors at each point.

Is the above correct? Thanks.
 
  • #2
cianfa72 said:
I suppose Frobenius's theorem in that case actually reduces to the condition that frame field vectors must commute, i.e. for each pair of frame field vectors ##X,Y## it must be ##[X,Y]=0##.

No. Frobenius makes a statement about subbundles of the tangent bundle, ##S\subseteq TM.## The vector fields (smooth sections of ##TM##) can be considered as a Lie algebra. If we now want to integrate along a subbundle, we have to make sure that this can be done in a left-invariant way, i.e. that the vector fields we are integrating along form a Lie subalgebra, which means ##[S,S]\subseteq S.## If they commute, then this condition is automatically fulfilled, but ##[S,S]=0## is not a necessary condition.
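
Here is a small sympy sketch of this distinction (a toy example of mine, not a rigorous treatment): one two-dimensional distribution in ##\mathbb R^3## that is involutive without commuting, and one that is not involutive at all.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)

def bracket(X, Y):
    # Lie bracket of vector fields on R^3, componentwise [X,Y]^i = X(Y^i) - Y(X^i)
    return sp.Matrix([
        sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
            for j in range(3))
        for i in range(3)
    ])

# Involutive but non-commuting: S = span{X, Y} with [X, Y] = Y != 0, so [S,S] ⊆ S.
X = sp.Matrix([1, 0, 0])            # d/dx
Y = sp.Matrix([0, sp.exp(x), 0])    # e^x d/dy
print(bracket(X, Y))                # (0, e^x, 0), i.e. equal to Y

# Not involutive: [X, Y2] = d/dz, which is not a combination of X and Y2.
Y2 = sp.Matrix([0, 1, x])           # d/dy + x d/dz
print(bracket(X, Y2))               # (0, 0, 1)
```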
 
  • #3
fresh_42 said:
No. Frobenius makes a statement about the subbundles of a tangent bundle, ##S\subseteq TM.## The tangent space can be considered as a Lie algebra.
Sorry, I edited the post. Actually the question is: does the Frobenius condition on the closure of the Lie bracket always hold true for any smooth frame field?
 
  • #4
My question is related to the following claim on the Wiki page for the Straightening theorem:

The Frobenius theorem in differential geometry can be considered as a higher-dimensional generalization of this theorem.

Since the Frobenius theorem gives a condition for complete integrability of subbundles, how is it related to the above theorem/claim?
 
  • #5
cianfa72 said:
My question is related to the following claim on the Wiki page for the Straightening theorem
Since the Frobenius theorem gives a condition for complete integrability of subbundles, how is it related to the above theorem/claim?

The theorem says: If we have a single vector field ##X\in TM## and a flow ##f## along ##X,## then we can find local coordinates such that ##X## is a partial derivative, and we can call this the first coordinate.

Partial - in fact any - derivatives are directional derivatives. The theorem only says that if we follow a flow ##f## through the vector field ##X##, then we can define a local coordinate system such that this direction is the partial derivative with respect to the first coordinate - for any flow at that point!

Following a flow in a second direction ##Y## would give us a second local coordinate ##Y=\dfrac{\partial }{\partial y_2}.## Following first ##X## and then ##Y## results in ##Y(X(f))## and the other way around in ##X(Y(f)).## The resulting difference is ##[X,Y](f).## Frobenius says that this is completely integrable if and only if ##[X,Y] \in \operatorname{span}\{X,Y\}.## We would always get ##[X,Y]\in TM##, but this wouldn't be called integrable because the resulting difference cannot be described by the given two vector fields, i.e. by the two partial coordinates, anymore.

The crucial part of these theorems is that they are independent of a particular flow ##f.## The vector fields can be considered as differential operators, namely partial derivatives independent of ##f.##
 
  • #6
fresh_42 said:
The theorem says: If we have a single vector field ##X\in TM## and a flow ##f## along ##X,## then we can find local coordinates such that ##X## is a partial derivative, and we can call this the first coordinate.
Can you define what you mean by the flow ##f##?

fresh_42 said:
The theorem only says that if we follow a flow ##f## through the vector field ##X##, then we can define a local coordinate system such that this direction is the partial derivative with respect to the first coordinate - for any flow at that point!

Following a flow in a second direction ##Y## would give us a second local coordinate ##Y=\dfrac{\partial }{\partial y_2}.## Following first ##X## and then ##Y## results in ##Y(X(f))## and the other way around in ##X(Y(f)).## The resulting difference is ##[X,Y](f).## Frobenius says that this is completely integrable if and only if ##[X,Y] \in \operatorname{span}\{X,Y\}.## We would always get ##[X,Y]\in TM##, but this wouldn't be called integrable because the resulting difference cannot be described by the given two vector fields, i.e. by the two partial coordinates, anymore.
Ok, however I don't grasp how Frobenius's theorem can be thought of as a generalization of the Straightening theorem.
 
  • #7
The flow is a function whose derivatives are all vectors of ##X##. Here is a self-made example:

[Image: Predator_Prey_08.png]


Consider the blue arrows as the vector field and the red line, a function, as the flow through ##X##. It follows the blue arrows (tangents) along ##\{(p,f(p))\}.## At a certain point ##p##, we have ##X_p(f)=\left. \dfrac{\partial f}{\partial x_1}\right|_{p}dx_1 +\left.\dfrac{\partial f}{\partial x_2}\right|_{p}dx_2## or in general independent of the location ##p##
$$
X(f)=\dfrac{\partial f}{\partial x_1}dx_1 +\dfrac{\partial f}{\partial x_2}dx_2
$$
or independent of the red function
$$
X=\dfrac{\partial }{\partial x_1}dx_1 +\dfrac{\partial }{\partial x_2}dx_2
$$
This also shows that Frobenius is difficult to draw. We can only draw pictures of two-dimensional vector fields. A proper submanifold would be one-dimensional. An integrable subbundle of dimension one is abelian, i.e. ##[\alpha X,\beta X]=0.## To get an interesting example, we need a three-dimensional manifold with a three-dimensional tangent bundle and a two-dimensional non-abelian subbundle. I have no idea how to draw a three-dimensional tangent bundle so that it doesn't get messy.

##dx_1## and ##dx_2## are the global basis vectors here. You can use this picture as an example for the straightening theorem for vector fields, since all tangents are one-dimensional. Take a point where ##X\neq 0,## i.e. where an arrow, a tangent, exists. Then you can define this arrow as the first component ##y_1## of a local coordinate system and ##X=\dfrac{\partial }{\partial y_1}## as the differential operator there, which makes the direction of ##X## the first component of the tangent vector. The second component ##\dfrac{\partial }{\partial y_2}## there will be zero (in an orthogonal coordinate system).
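
A small sympy sketch of the straightening idea (my own toy example, assuming the usual coordinate vector-field conventions): for ##X=\dfrac{\partial}{\partial x}+x\dfrac{\partial}{\partial y}## on ##\mathbb R^2##, the coordinates ##u=x,\ v=y-x^2/2## turn ##X## into the single partial derivative ##\dfrac{\partial}{\partial u}##.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def X(f):
    # the vector field X = d/dx + x d/dy acting on a scalar expression f(x, y)
    return sp.diff(f, x) + x*sp.diff(f, y)

u = x               # first straightened coordinate
v = y - x**2/2      # second straightened coordinate

print(sp.simplify(X(u)))   # 1 -> component 1 along d/du
print(sp.simplify(X(v)))   # 0 -> component 0 along d/dv, hence X = d/du locally
```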
 
  • #8
cianfa72 said:
Sorry, I edited the post. Actually the question is: does the Frobenius condition on the closure of the Lie bracket always hold true for any smooth frame field?
The Frobenius condition is trivially always satisfied for a frame field, since it always holds for the tangent bundle of a manifold. Typically, as @fresh_42 said, we are interested in subbundles, so if you had three vector fields and they were closed under the bracket, then they would satisfy the Frobenius condition for the subbundle spanned by them.
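
A quick sympy illustration of this (my own example): a frame field of ##\mathbb R^2## that does not commute, yet whose bracket is automatically a pointwise combination of the frame vectors.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
coords = (x, y)

E1 = sp.Matrix([1, 0])       # d/dx
E2 = sp.Matrix([x, 1])       # x d/dx + d/dy, independent of E1 at every point

def bracket(X, Y):
    # Lie bracket on R^2, componentwise [X,Y]^i = X(Y^i) - Y(X^i)
    return sp.Matrix([
        sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
            for j in range(2))
        for i in range(2)
    ])

B = bracket(E1, E2)                                  # = (1, 0), i.e. [E1, E2] = E1 != 0
a, b = sp.symbols('a b')
print(B, sp.solve(list(a*E1 + b*E2 - B), (a, b)))    # {a: 1, b: 0}, so the bracket stays in the span
```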
 
  • #9
cianfa72 said:
Can you define what you mean by the flow ##f##?


Ok, however I don't grasp how Frobenius's theorem can be thought of as a generalization of the Straightening theorem.
There is a picture on page 500 of Lee that shows this. Basically, there is an integral submanifold such that the subbundle is tangent to it, and there are associated charts in which the slices of the submanifold are flat, like ##k##-planes.
 
  • #10
fresh_42 said:
The flow is a function whose derivatives are all vectors of ##X##. Here is a self-made example:

[Image: attachment 348564]

Consider the blue arrows as the vector field and the red line, a function, as the flow through ##X##.
Sorry, is the function ##f## defined from the manifold to the real numbers? If yes, is the red line basically a "level set" of such a function ##f##?

fresh_42 said:
It follows the blue arrows (tangents) along ##\{(p,f(p))\}##.
What do you mean by ##p##? In your two-dimensional example, is ##p## a point in the plane?

Btw, I saw the definition of flow on Wikipedia and to me it seems different from yours.
 
  • #11
cianfa72 said:
Sorry, is the function ##f## defined from the manifold to the real numbers?
It is defined as ##f\, : \,M\longrightarrow M.## I used a picture that I already had, so I used a level line ##f\, : \,M\longrightarrow \mathbb{R}.##
cianfa72 said:
If yes, is the red line basically a "level set" of such a function ##f##?
It is only a very simplified picture. In this case, it was a level set of a system of differential equations (Lotka-Volterra), but this is irrelevant. We are talking about local properties, so cut out a tiny section if you like and make ##f## a local function. Whatever. It was meant to give you an impression.
cianfa72 said:
What do you mean by ##p##? In your two-dimensional example, is ##p## a point ...
Yes. The tangent bundle is
$$TM= \bigcup_{p \in M}\left. TM\right|_p\,,$$
the disjoint union, over all points ##p\in M##, of the tangent spaces ##T_pM## at those points.
cianfa72 said:
... in the plane?
No, on the manifold. I only drew tangent vectors so you can only reconstruct the shape of the manifold by the level lines and the tangents. The manifold would look like
[Image: Predator_Prey_10.png]

But as we talk about tangent bundles, it is not necessary to consider the shape of the manifold.
cianfa72 said:
Btw, I saw the definition of flow on Wikipedia and to me it seems different from yours.
What you quoted is a parameterization of my function. We start at ##p=\varphi (p,0) ## and go in small steps of ##t## from point to point along the directions given by the vector field.

Have you studied the entire article that you quoted, esp. https://en.m.wikipedia.org/wiki/Flow_(mathematics)#Algebraic_equation?
It also says "Informally, a flow may be viewed as a continuous motion of points over time" which is a motion on the manifold, parameterized by time. The tangents of such a motion are the vectors of the vector field: we flow through the vector field. The parameter time builds the additive real group. Here is another definition:
https://www.physicsforums.com/insights/pantheon-derivatives-part-iv/#A-–-Definitions

The term flow came from the differential equations that describe the motion of a particle in a fluid. Here we go backward: we start with a tangent bundle and sort of reconstruct the motion of the particle from there. The term flow remained because it evokes the image of a particle flowing in a fluid, following the vector field determined by the motion of the entire fluid. In the end it is a function ##f=\varphi^t\, : \,M\longrightarrow M.##
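
A small numerical sketch of a flow in this sense (my own example): for the vector field ##X(x,y)=(-y,x)## on ##\mathbb R^2## the flow is rotation by the angle ##t##; it satisfies ##\varphi^0=\mathrm{id}##, ##\varphi^{s+t}=\varphi^s\circ\varphi^t##, and its tangents are the vectors of ##X##.

```python
import numpy as np

def X(p):
    # the vector field X(x, y) = (-y, x)
    x, y = p
    return np.array([-y, x])

def phi(t, p):
    # exact flow of X: rotation of p by the angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([c*p[0] - s*p[1], s*p[0] + c*p[1]])

p = np.array([2.0, 0.5])
s, t, h = 0.3, 0.7, 1e-6

print(np.allclose(phi(0.0, p), p))                     # phi^0 = identity
print(np.allclose(phi(s + t, p), phi(s, phi(t, p))))   # additive group law in t
tangent = (phi(t + h, p) - phi(t - h, p)) / (2*h)      # numerical d/dt of the flow
print(np.allclose(tangent, X(phi(t, p)), atol=1e-6))   # the tangents are vectors of X
```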
 
  • #12
fresh_42 said:
The tangent bundle is
$$TM= \bigcup_{p \in M}\left. TM\right|_p\,,$$ the disjoint union, over all points ##p\in M##, of the tangent spaces ##T_pM## at those points.

No, on the manifold. I only drew tangent vectors so you can only reconstruct the shape of the manifold by the level lines and the tangents. The manifold would look like
[Image: attachment 348580]
Ok, so is your ##\{(p, f(p))\}## in post #7 an element of the tangent bundle ##TM##?

fresh_42 said:
In the end it is a function ##f=\varphi^t\, : \,M\longrightarrow M.##
Ok, so your function ##f=\varphi^t\, : \,M\longrightarrow M## is parametrized by the parameter ##t##. However, I don't understand your notation ##Y(X(f))##. ##X##, as a vector field, should act on functions ##f\, : \,M\longrightarrow \mathbb{R}##.
 
  • #13
cianfa72 said:
Ok, so is your ##\{(p, f(p))\}## in post #7 an element of the tangent bundle ##TM##?
No, this was just a point on the manifold with its function value. E.g. a point on Earth and its air pressure. The tangent vectors point along the isobars, since, as you correctly observed, my function was a level set, an isobar. My ##f## in the picture is a walk on Earth along isobars. The tangent vectors tell me where to go from point to point, parameterized by time, the second coordinate of my flow (walk).

cianfa72 said:
Ok, so your function ##f:\,M\longrightarrow M## is parametrized by the parameter ##t##
Yes, but: Forget the manifold! Forget the function! It might well be that I made some technical errors since I only wanted to describe the situation, and that was a tangent bundle, neither a function nor a manifold. If you want to have it rigorous then read all the links above, yours and mine, or even better a textbook or lecture note.

You have to make a decision:

Either you want to know how it works in general; then you must not care about ##M## or ##f##, because this entire thread is about ##X##: no ##p\in M##, no ##f\in C^\infty (M).## In that case you can use the image as an example of a vector field and try to understand the two theorems.

Or you want technical precision. In that case, you have to provide the entire framework, particularly including the notations. Every single author writes his derivatives differently. And all those names: frames, vector bundles, subbundles, etc. are in the end coordinate systems and derivatives. Best would be a publicly available lecture note with the two theorems you mentioned - no Wikipedia, no images, no insight articles. This way, we could seriously talk and use the same language.

The straightening theorem says that a vector field can locally be written as the partial derivative with respect to the first coordinate of a suitably chosen coordinate system.

The Wikipedia link is insufficient for a qualified discussion. It is poorly written. "Let ##f=(f_1,\ldots,f_n).##" What does that mean? ##n## components of what function? A map from ##M## or from ##\mathbb{R}^m## with ##m=n## or ##m\neq n## to what? Obviously coordinates. But that has to be guessed from context. You cannot read it there. I seriously recommend avoiding Wikipedia if you want to learn something. Choose a lecture note. There are hundreds available around the world.

The Frobenius theorem says that a distribution of vector fields is completely integrable if and only if it forms a subalgebra of the Lie algebra of vector fields.

The same is true here: do not consult Wikipedia, search for a lecture note. The search key "Frobenius Theorem + differential geometry + pdf" would probably do. If not, try differential topology instead.

You have asked why the straightening theorem is called the one-dimensional case of the Frobenius theorem. It actually says that the Frobenius theorem is a generalization for higher dimensions, i.e. the other way around. The straightening theorem says we can write ##X=\sum_{k=1}^{n} c_k \dfrac{\partial }{\partial x_k}## as ##X=\dfrac{\partial }{\partial y_1}## by a suitable choice of local coordinates. This is clearly completely integrable since ##\left[\alpha\dfrac{\partial }{\partial y_1},\beta\dfrac{\partial }{\partial y_1}\right]=0.##

This is no longer automatically true in higher dimensions. If we have ##X=\sum_{k=1}^{n} c_k \dfrac{\partial }{\partial x_k}## and ##Y=\sum_{k=1}^{n} d_k \dfrac{\partial }{\partial x_k}## then we cannot assume that the two vector fields commute (which answers your very first question). We have a mess of mixed partial derivatives in ##[X,Y]##, so the order of integration matters. Complete integrability means that we only want to integrate using ##X## and ##Y##. The mixed partial derivatives must therefore be expressible as linear combinations of ##X## and ##Y## again, i.e. ##[X,Y] \in \operatorname{span}\{X,Y\}.## That's the Frobenius theorem for two vector fields. For even more (linearly independent) vector fields the theorem says that we have complete integrability (involutivity) if those vector fields form a Lie subalgebra of the tangent space. If you look at the pictures on Wikipedia then you will notice how hard it is to draw an image for the Frobenius theorem where ##[X,Y]\neq 0.## I assume that you won't find one in lecture notes either. We run out of dimensions too quickly to get an impression of, say, four linearly independent vectors.

If we consider both theorems in the light of differential equation systems, then the straightening theorem allows us a solution with only one integration along ##y_1.## If we have more independent variables, then we will likely have to integrate over more variables. But we aren't allowed to switch orders: ##[X,Y]=Z\neq 0.## That gives us the next variable ##Z## and so on. However, if ##[X,Y]=\alpha X+\beta Y## then we do not need new variables when we switch the order. This condition is equivalent to the statement that ##\operatorname{span}\{X,Y\}## is a two-dimensional Lie algebra, a Lie subalgebra of all possible vector fields. It allows us to integrate without having to add new variables.

The equations are more complicated if we have to solve for more than two variables. However, as long as the corresponding vector fields build a Lie subalgebra, we at least don't have to add even more variables if we switch the order of integration.
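
A quick sympy check of the identity used above, ##X(Y(f))-Y(X(f))=[X,Y](f)## (a toy example of mine): with ##X=\dfrac{\partial}{\partial x}## and ##Y=x\dfrac{\partial}{\partial y}## the mixed partials collapse to ##\dfrac{\partial f}{\partial y}##, i.e. ##[X,Y]=\dfrac{\partial}{\partial y}##.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)

def X(g):
    return sp.diff(g, x)        # X = d/dx

def Y(g):
    return x*sp.diff(g, y)      # Y = x d/dy

mixed = sp.simplify(X(Y(f)) - Y(X(f)))      # the mixed second derivatives cancel...
print(mixed)                                # ...leaving Derivative(f(x, y), y)
print(sp.simplify(mixed - sp.diff(f, y)))   # 0, so [X,Y] = d/dy = (1/x)*Y wherever x != 0
```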
 
  • #14
fresh_42 said:
No, this was just a point on the manifold with its function value. E.g. a point on Earth and its air pressure. The tangent vectors point along the isobars, since, as you correctly observed, my function was a level set, an isobar. My ##f## in the picture is a walk on Earth along isobars. The tangent vectors tell me where to go from point to point, parameterized by time, the second coordinate of my flow (walk).
Ok, so your picture in #12 is not the manifold: it is the graph of the function ##f: M \longrightarrow \mathbb{R}##. The manifold there is actually the plane (the Earth in your example).

fresh_42 said:
The straightening theorem says that a vector field can locally be written as the partial derivative with respect to the first coordinate of a suitably chosen coordinate system.

The Frobenius theorem says that a distribution of vector fields is completely integrable if and only if it forms a subalgebra of the Lie algebra of vector fields.

You have asked why the straightening theorem is called the one-dimensional case of the Frobenius theorem. It actually says that the Frobenius theorem is a generalization for higher dimensions, i.e. the other way around. The straightening theorem says we can write ##X=\sum_{k=1}^{n} c_k \dfrac{\partial }{\partial x_k}## as ##X=\dfrac{\partial }{\partial y_1}## by a suitable choice of local coordinates. This is clearly completely integrable since ##\left[\alpha\dfrac{\partial }{\partial y_1},\beta\dfrac{\partial }{\partial y_1}\right]=0.##

This is no longer automatically true in higher dimensions. If we have ##X=\sum_{k=1}^{n} c_k \dfrac{\partial }{\partial x_k}## and ##Y=\sum_{k=1}^{n} d_k \dfrac{\partial }{\partial x_k}## then we cannot assume that the two vector fields commute (which answers your very first question). We have a mess of mixed partial derivatives in ##[X,Y]##, so the order of integration matters. Complete integrability means that we only want to integrate using ##X## and ##Y##. The mixed partial derivatives must therefore be expressible as linear combinations of ##X## and ##Y## again, i.e. ##[X,Y] \in \operatorname{span}\{X,Y\}.## That's the Frobenius theorem for two vector fields. For even more (linearly independent) vector fields the theorem says that we have complete integrability (involutivity) if those vector fields form a Lie subalgebra of the tangent space.
Ah ok, so Frobenius's theorem, as a generalization of the Straightening theorem, is about the complete integrability conditions (automatically satisfied for a single non-zero smooth vector field since ##[X,X]=0##).
 
  • #15
fresh_42 said:
If we consider both theorems in the light of differential equation systems, then the straightening theorem allows us a solution with only one integration along ##y_1.## If we have more independent variables, then we will likely have to integrate over more variables. But we aren't allowed to switch orders: ##[X,Y]=Z\neq 0.## That gives us the next variable ##Z## and so on. However, if ##[X,Y]=\alpha X+\beta Y## then we do not need new variables when we switch the order. This condition is equivalent to the statement that ##\operatorname{span}\{X,Y\}## is a two-dimensional Lie algebra, a Lie subalgebra of all possible vector fields. It allows us to integrate without having to add new variables.
Sorry, can you elaborate on this point regarding the order of integration (for differential equation systems), which requires adding new variables when the vector fields do not commute? Thanks.
 
  • #16
cianfa72 said:
Sorry, can you elaborate on this point regarding the order of integration (for differential equation systems), which requires adding new variables when the vector fields do not commute? Thanks.
##\int dX\,dY \stackrel{i.g.}{\neq } \int dY\,dX## in dimensions higher than one. If we want to solve a system of differential equations, we must get under control the difference between first integrating ##X## and then ##Y## or the other way around. If ##\{X,Y\}## defines a Lie subalgebra, then this difference can be expressed in terms of ##X## and ##Y##. If not, then ##XY-YX=[X,Y]=Z\not\in \operatorname{span}\{X,Y\}## and we get something like ##\int dX\,dY = \int dY\,dX + \int dZ##, introducing a further vector field ##Z,## i.e. a new variable, a new vector field. It is still part of the tangent bundle, but no longer in the subspace spanned by ##X## and ##Y.##

I suggest that you look at my image again. Here is the example worked out with two spatial coordinates ##x,y## and time ##t## and the tangent vectors ##(\dot x,\dot y).##
https://www.physicsforums.com/insights/differential-equation-systems-and-nature/#Predator-Prey-Model

Unfortunately, the system is defined by time, so a priori we have only the vector field ##X=\dfrac{d}{dt}.## The one-dimensional case doesn't tell us a lot, so let's get rid of time and consider the spatial function, my level set, say
$$
F(x,y)=\dfrac{e^{x}}{x^7}\cdot \dfrac{e^{2y}}{y^{10}} =e^{-C}=2
$$
where we can now investigate at least ##\dfrac{\partial }{\partial x}## and ##\dfrac{\partial }{\partial y}.## I suggest that you play with that example and restrict your view to some local neighborhood, say around the point ##p=(p_x,p_y)=(1.49147\, , \,1)##, to make things easier and to have a locally well-defined function and no closed curve. Of course, ##\dfrac{\partial }{\partial x},\dfrac{\partial }{\partial y}## span the two-dimensional tangent space, which leaves us only two possible one-dimensional Lie subalgebras, and Frobenius becomes more or less trivial. That's what I meant by running out of dimensions for images. However, you can take time back in and gain a third dimension (which cannot be seen, except by imagining following some flow ##F(x(t),y(t))##).
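
If it helps, here is a quick numerical check (just a sketch) that the quoted point indeed lies on the level set ##F=2##, together with finite-difference values of the partial derivatives there.

```python
import numpy as np

def F(x, y):
    return np.exp(x)/x**7 * np.exp(2*y)/y**10

px, py, h = 1.49147, 1.0, 1e-6
print(F(px, py))                                   # approximately 2.000, so p lies on the level set
print((F(px + h, py) - F(px - h, py)) / (2*h))     # dF/dx at p (central finite difference)
print((F(px, py + h) - F(px, py - h)) / (2*h))     # dF/dy at p
```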

All other explanations or elaborations require that you first provide notations that allow me
a) to speak of an actual integration of a vector field, and
b) to connect a system of differential equations with vector fields,
or they would require me to prepare a lecture.

The key to the entire subject is to distinguish between ##X\, , \,X(f)\, , \,X_p\, , \,X_p(f)## or whatever you use to write vector bundles, vector fields, tangent vectors, and components of the slope. Look up two books and you get four conventions.
 
  • #17
fresh_42 said:
##\int dX\,dY \stackrel{i.g.}{\neq } \int dY\,dX## in dimensions higher than one. If we want to solve a system of differential equations, we must get under control the difference between first integrating ##X## and then ##Y## or the other way around. If ##\{X,Y\}## defines a Lie subalgebra, then this difference can be expressed in terms of ##X## and ##Y##.
Sorry, what does the symbol ##\stackrel{i.g.}{\neq }## mean?

fresh_42 said:
##XY-YX=[X,Y]=Z\not\in \operatorname{span}\{X,Y\}## and we get something like ##\int dX\,dY = \int dY\,dX + \int dZ##, introducing a further vector field ##Z,## i.e. a new variable, a new vector field. It is still part of the tangent bundle, but no longer in the subspace spanned by ##X## and ##Y.##
You mean ##Z## (that is a vector field) is still a smooth section of the tangent bundle.

fresh_42 said:
The one-dimensional case doesn't tell us a lot, so let's get rid of time and consider the spatial function, my level set, say
$$
F(x,y)=\dfrac{e^{x}}{x^7}\cdot \dfrac{e^{2y}}{y^{10}} =e^{-C}=2
$$ where we can now investigate at least ##\dfrac{\partial }{\partial x}## and ##\dfrac{\partial }{\partial y}.## I suggest that you play with that example and restrict your view to some local neighborhood, say around the point ##p=(p_x,p_y)=(1.49147\, , \,1)##, to make things easier and to have a locally well-defined function and no closed curve. Of course, ##\dfrac{\partial }{\partial x},\dfrac{\partial }{\partial y}## span the two-dimensional tangent space, which leaves us only two possible one-dimensional Lie subalgebras, and Frobenius becomes more or less trivial. That's what I meant by running out of dimensions for images.
Here the point is that, since the manifold is two-dimensional, the vector fields ##X,Y## span the full two-dimensional Lie algebra.
 
  • #18
cianfa72 said:
Sorry, what does the symbol ##\stackrel{i.g.}{\neq }## mean?
It means: in general. The two sides are usually unequal, but of course there are examples where they are equal.
cianfa72 said:
You mean ##Z## (that is a vector field) is still a smooth section of the tangent bundle.
Yes.
cianfa72 said:
Here the point is that, since the manifold is two-dimensional, the vector fields ##X,Y## span the full two-dimensional Lie algebra.
Yes, but there are two two-dimensional Lie algebras (up to isomorphism): one is abelian, and for the other one we have ##[X,Y]=2Y.## I have no idea how to realize this with vector fields, esp. as ##\left[\dfrac{\partial }{\partial x},\dfrac{\partial }{\partial y}\right]=0.## But for matrices we can have
$$
\left[\begin{pmatrix}1&0\\0&-1\end{pmatrix}\, , \,\begin{pmatrix}0&1 \\0&0\end{pmatrix}\right]=\begin{pmatrix}0&2\\0&0\end{pmatrix}
$$
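
A one-line numerical confirmation (sketch) of this commutator:

```python
import numpy as np

X = np.array([[1., 0.], [0., -1.]])
Y = np.array([[0., 1.], [0.,  0.]])
print(X @ Y - Y @ X)                       # [[0, 2], [0, 0]]
print(np.allclose(X @ Y - Y @ X, 2*Y))     # True: [X, Y] = 2Y
```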
 
  • #19
fresh_42 said:
Yes, but there are two two-dimensional Lie algebras (up to isomorphism): one is abelian, and for the other one we have ##[X,Y]=2Y.## I have no idea how to realize this with vector fields, esp. as ##\left[\dfrac{\partial }{\partial x},\dfrac{\partial }{\partial y}\right]=0.##
Ok, I believe for vector fields only the abelian Lie algebra actually makes sense.
 
  • #20
cianfa72 said:
Ok, I believe for vector fields only the abelian Lie algebra actually makes sense.
No, not at all. Consider the Lie group ##\operatorname{SL}(2,\mathbb{R}).## Its left-invariant vector fields form a Lie algebra, ##\mathfrak{sl}(2,\mathbb{R}),## which is real, three-dimensional, and simple, i.e. basically the opposite of abelian. And its Borel subalgebra is spanned by exactly the two matrices in post #18.

As this Lie subalgebra is a matrix algebra, it is the Lie algebra of a two-dimensional Lie group again. The Lie group is generated by ##\begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}## and ##\begin{pmatrix}1&c\\0&1\end{pmatrix}.## So all you have to do now is compute the left-invariant vector fields of this Lie group and you get a two-dimensional Lie algebra with the multiplication in post #18.

Here is an example how it is done:
https://www.physicsforums.com/insights/pantheon-derivatives-part-iv/#B-–-Left-Invariant-Vector-Fields-and-GL-n

This is also an example where you can test Frobenius with:

Manifold (smooth, real, three-dimensional): ##\operatorname{SL}(2,\mathbb{R})=\left\{A\in \operatorname{GL}(2,\mathbb{R})\,|\,\operatorname{det}(A)=1\right\}##
Vector fields (three-dimensional, non-abelian): ##\mathfrak{sl}(2,\mathbb{R})=\left\{A\in \mathfrak{gl}(2,\mathbb{R})\,|\,\operatorname{trace}(A)=0\right\}##
Lie subgroup (real, two-dimensional): ##\bigl\langle \begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}, \begin{pmatrix}1&c\\0&1\end{pmatrix}\bigr\rangle ##
Lie subalgebra (real, two-dimensional): ##\operatorname{span}\left\{X=\begin{pmatrix}1&0\\0&-1\end{pmatrix}\, , \,Y=\begin{pmatrix}0&1\\0&0\end{pmatrix}\, , \,[X,Y]=2Y\right\}##
 
  • #21
fresh_42 said:
No, not at all. Consider the Lie group ##\operatorname{SL}(2,\mathbb{R}).## Its left-invariant vector fields form a Lie algebra, ##\mathfrak{sl}(2,\mathbb{R}),## which is real, three-dimensional, and simple, i.e. basically the opposite of abelian. And its Borel subalgebra is spanned by exactly the two matrices in post #18.

As this Lie subalgebra is a matrix algebra, it is the Lie algebra of a two-dimensional Lie group again. The Lie group is generated by ##\begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}## and ##\begin{pmatrix}1&c\\0&1\end{pmatrix}.## So all you have to do now is compute the left-invariant vector fields of this Lie group and you get a two-dimensional Lie algebra with the multiplication in post #18.
Sorry, I'm not an expert in this area. I gather that the matrices $$\left\{ \begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}, \begin{pmatrix}1&c\\0&1\end{pmatrix} \right\}$$ generate, via the exponential map, a two-dimensional Lie group. Then, if one considers its left-invariant vector fields (as explained in your link), one gets a Lie algebra (i.e. a vector space with a closed binary operation, the Lie bracket, satisfying the axioms of a Lie algebra).

What does it mean that such a Lie algebra has the multiplication (i.e. Lie bracket?) as in your post #18? Do you mean the binary operation between ##X## and ##Y## is given by ##[X,Y]=2Y##?
 
  • #22
cianfa72 said:
Sorry, I'm not an expert in this area. I gather that the matrices $$\left\{ \begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}, \begin{pmatrix}1&c\\0&1\end{pmatrix} \right\}$$ generate, via the exponential map, a two-dimensional Lie group.
These two matrices generate the Lie group. In the case of groups, that means the group elements are products of arbitrary finite length of these two matrices and their inverses. The group generators are already the result of exponentiation:
$$
\exp X(t)=\exp\begin{pmatrix}t&0\\0&-t\end{pmatrix}=\begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}\, , \,
\exp Y(c)=\exp\begin{pmatrix}0&c\\0&0\end{pmatrix}=\begin{pmatrix}1& c\\0&1\end{pmatrix}
$$
cianfa72 said:
Then, if one considers its left-invariant vector fields (as explained in your link), one gets a Lie algebra (i.e. a vector space with a closed binary operation, the Lie bracket, satisfying the axioms of a Lie algebra).

What does it mean that such a Lie algebra has the multiplication (i.e. Lie bracket?) as in your post #18? Do you mean the binary operation between ##X## and ##Y## is given by ##[X,Y]=2Y##?
You have two parameters which are the two variables. The vector fields do not commute since
$$
\left[X(t),Y(c)\right]=X(t)\circ Y(c)-Y(c)\circ X(t)=2t \,Y(c)
$$
It means that
$$
\left[X(t),Y(c)\right](f)=X(t)(Y(c)(f))-Y(c)(X(t)(f))=2t \,Y(c)(f)
$$
and at a given location ##p## that
$$
\left[X_p(t),Y_p(c)\right](f)=X_p(t)(Y_p(c)(f))-Y_p(c)(X_p(t)(f))=2t \,Y_p(c)(f).
$$
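
A sympy sketch (my own verification, not part of the derivation above) that exponentiating the algebra elements gives the two group generators, and that the parametrized commutator is ##2t\,Y(c)## as stated:

```python
import sympy as sp

t, c = sp.symbols('t c', real=True)

Xt = sp.Matrix([[t, 0], [0, -t]])   # algebra element t*X
Yc = sp.Matrix([[0, c], [0, 0]])    # algebra element c*Y

print(sp.simplify(Xt.exp()))                  # [[e^t, 0], [0, e^-t]]
print(sp.simplify(Yc.exp()))                  # [[1, c], [0, 1]]
print(sp.simplify(Xt*Yc - Yc*Xt - 2*t*Yc))    # zero matrix, i.e. [X(t), Y(c)] = 2t Y(c)
```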
 
  • #23
fresh_42 said:
These two matrices generate the Lie group. In the case of groups, that means the group elements are products of arbitrary finite length of these two matrices and their inverses. The group generators are already the result of exponentiation:
$$
\exp X(t)=\exp\begin{pmatrix}t&0\\0&-t\end{pmatrix}=\begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}\, , \,
\exp Y(c)=\exp\begin{pmatrix}0&c\\0&0\end{pmatrix}=\begin{pmatrix}1& c\\0&1\end{pmatrix}
$$
Ok, therefore the group generators are elements of the Lie group and not of the Lie algebra. Above, ##X## and ##Y## are actually basis elements of the Lie algebra (i.e. the algebra defined on the tangent space of the Lie group at the identity element). In particular $$X = \begin{pmatrix}1&0\\0&-1\end{pmatrix}\, , \,Y=\begin{pmatrix}0&1 \\0&0\end{pmatrix}$$
fresh_42 said:
You have two parameters which are the two variables. The vector fields do not commute since
$$
\left[X(t),Y(c)\right]=X(t)\circ Y(c)-Y(c)\circ X(t)=2t \,Y(c)
$$
It means that
$$
\left[X(t),Y(c)\right](f)=X(t)(Y(c)(f))-Y(c)(X(t)(f))=2t \,Y(c)(f)
$$
and at a given location ##p## that
$$
\left[X_p(t),Y_p(c)\right](f)=X_p(t)(Y_p(c)(f))-Y_p(c)(X_p(t)(f))=2t \,Y_p(c)(f).
$$
Sorry, are the above ##X(t)## and ##Y(c)## actually the left-invariant vector fields w.r.t. the Lie group?

P.S. Sorry for the possible confusion :rolleyes:
 
  • #25
cianfa72 said:
Sorry, are the above ##X(t)## and ##Y(c)## actually the left-invariant vector fields w.r.t. the Lie group?
Well, they should be. Otherwise, some important theorems would fail.

Calculate them as an exercise. You have two Lie group elements, already parametrized by time (t) and boost (c). The determinants of the exponentials are one, and the traces of the presumed vector fields are zero. Take paths along time and boost and calculate their derivatives at
$$
G \ni \begin{pmatrix}1&0\\0&1\end{pmatrix} \stackrel{1:1}{\longleftrightarrow } \begin{pmatrix}0&0\\0&0\end{pmatrix} \in \mathfrak{g}
$$
You can also calculate the derivatives at other points ##p_0##, e.g.
$$
G \ni \underbrace{\begin{pmatrix}e^{t_0}&c_0\\0&e^{-t_0}\end{pmatrix}}_{=:p_0} \stackrel{1:1}{\longleftrightarrow } \begin{pmatrix}0&0\\0&0\end{pmatrix} \in \mathfrak{g}
$$
and check left-invariance.
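
Here is a sympy sketch of that exercise (my own attempt, using the usual matrix-group convention in which the left-invariant field at ##g## is ##g\cdot X##): differentiate the two one-parameter paths at the identity to obtain ##X## and ##Y##, then check left-invariance at an arbitrary ##p_0##.

```python
import sympy as sp

t, c, t0, c0, s = sp.symbols('t c t0 c0 s', real=True)

path_t = sp.Matrix([[sp.exp(t), 0], [0, sp.exp(-t)]])   # path along 'time'
path_c = sp.Matrix([[1, c], [0, 1]])                    # path along 'boost'

X = path_t.diff(t).subs(t, 0)    # [[1, 0], [0, -1]]
Y = path_c.diff(c).subs(c, 0)    # [[0, 1], [0, 0]]
print(X, Y)

# Left-invariance at an arbitrary group element p0: the tangent of the curve
# s -> p0 * path_t(s) at s = 0 equals p0 * X (and analogously for Y).
p0 = sp.Matrix([[sp.exp(t0), c0], [0, sp.exp(-t0)]])
curve = p0 * path_t.subs(t, s)
print(sp.simplify(curve.diff(s).subs(s, 0) - p0*X))     # zero matrix
```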

It is an easy, non-trivial example that allows calculations that do not explode and gives you an impression of all the technical terms. You can see how I did it, with my notation, in
https://www.physicsforums.com/insights/journey-manifold-su2mathbbc-part/#5-Tangent-Bundle
You might have another terminology. Notation and wording depend heavily on the author.

I listed a few of them in that article:
  1. first derivative ##L'_g : x \longmapsto \alpha(x)##
  2. differential ##dL_g = \alpha_x \cdot d x##
  3. linear approximation of ##L_g## by ##L_g(x_0+\varepsilon)=L_g(x_0)+J_{x_0}(L_g)\cdot \varepsilon +O(\varepsilon^2) ##
  4. linear mapping (Jacobi matrix) ##J_{x}(L_g) : v \longmapsto \alpha_{x} \cdot v##
  5. vector (tangent) bundle ##(p,\alpha_{p}\;d x) \in (U\times \mathbb{R},\mathbb{R},\pi)##
  6. ##1-##form (Pfaffian form) ##\omega_{p} : v \longmapsto \langle \alpha_{p} , v \rangle ##
  7. cotangent bundle ##(p,\omega_p) \in (U,T^*U,\pi^*)##
  8. section of ##(U\times \mathbb{R},\mathbb{R},\pi)\, : \,\sigma \in \Gamma(U,TU)=\Gamma(U) : p \longmapsto \alpha_{p}##
  9. If ##f,g : U \mapsto \mathbb{R}## are smooth functions, then \begin{equation*}
    \begin{aligned}D_xL_y (f\cdot g) &= \alpha_x (f\cdot g)' \\&= \alpha_x (f'\cdot g + f \cdot g') \\&= D_xL_y(f)\cdot g + f \cdot D_xL_y(g) \end{aligned} \end{equation*} and ##D_xL_y## is a derivation on ##C^\infty(\mathbb{R})##.
  10. ##L^*_x(\alpha_y)=\alpha_{xy}## is the pullback section of ##\sigma: p \longmapsto \alpha_p## by ##L_x##.
Whatever you call them, they are simply a slope at some point, a directional derivative at some point, a linear transformation of the direction at some point, a tangent space of many directions at some point, a bundle of tangent spaces of many directions at many points. They all have different names, different notations, and different degrees of abstraction. However, all started with
$$
\left. \dfrac{\partial }{\partial x_k}\right|_{x=p}f
$$
The variables are: dimension of ##x##, components of ##x##, location ##p##, direction of differentiation ##v_k##, and function ##f.## That leaves many combinations and generalizations.

I truly recommend sticking to one source, preferably your textbook or lecture note, and performing some calculations, e.g. with the example or links above (in your technical language). The risk of being confused by different authors is not negligible.
 
  • #26
I'm not sure about the following: ##SL(2,\mathbb R)## is a three-dimensional Lie group. Therefore there should be 3 generators of the group itself.

That means that $$\left\{ \begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}, \begin{pmatrix}1&c\\0&1\end{pmatrix} \right\}$$ are actually generators of a Lie subgroup of ##SL(2,\mathbb R)##, right?
 
  • #27
Ok, the Lie algebra associated to the Lie subgroup in post #26 is spanned by $$\left \{ X = \begin{pmatrix}1&0\\0&-1\end{pmatrix}\, , \,Y=\begin{pmatrix}0&1 \\0&0\end{pmatrix} \right \}$$ The "binary/multiplication operation" of such an algebra is by definition the Lie bracket. In this specific case the Lie bracket of the basis elements is $$[X,Y]=XY - YX=2Y$$ Of course the bracket stays within the underlying vector space (i.e. the Lie bracket is a linear combination of the basis vectors).
 
  • #28
cianfa72 said:
I'm not sure about the following: ##SL(2,\mathbb R)## is a three-dimensional Lie group. Therefore there should be 3 generators of the group itself.
$$
\operatorname{SL}(2,\mathbb{R})=\bigl\langle \begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}\, , \, \begin{pmatrix}1&c_+\\0&1\end{pmatrix} \, , \,\begin{pmatrix}1&0\\c_-&1\end{pmatrix}\bigr\rangle
$$
cianfa72 said:
That means that $$\left\{ \begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}, \begin{pmatrix}1&c\\0&1\end{pmatrix} \right\}$$ are actually generators of a Lie subgroup of ##SL(2,\mathbb R)##, right?
No. These two only generate a Borel subgroup of ##\operatorname{SL}(2,\mathbb R)##, a maximal solvable subgroup. It generates the smallest non-abelian Lie algebra and therefore the smallest "interesting" example.
 
  • #30
fresh_42 said:
$$\left\{ \begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}, \begin{pmatrix}1&c\\0&1\end{pmatrix} \right\}$$
No. These two only generate a Borel subgroup of ##\operatorname{SL}(2,\mathbb R)##, a maximal solvable subgroup. It generates the smallest non-abelian Lie algebra and therefore the smallest "interesting" example.
Sorry, but is the above Borel subgroup of ##\operatorname{SL}(2,\mathbb R)## not itself a Lie group?
 
  • #31
cianfa72 said:
Sorry, but is the above Borel subgroup of ##\operatorname{SL}(2,\mathbb R)## not itself a Lie group?
It is of course a Lie group. It's a two-dimensional smooth manifold. A typical element looks like
$$
\begin{pmatrix}e^t&c\\0&e^{-t}\end{pmatrix}
$$
The toral element, the diagonal matrix, is a flow through time, and the unipotent element makes the group non-abelian. The group is
$$
\biggl\langle \begin{pmatrix}e^t&c\\0&e^{-t}\end{pmatrix} \biggr\rangle =\left\{\left.\begin{pmatrix}x&y\\0&z\end{pmatrix}\,\right|\,x\cdot z= 1,\ x>0\right\} .
$$
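
A small sympy check (sketch) that matrices of this shape do form a group: the product of two such elements has the same shape, and the determinant stays ##1##.

```python
import sympy as sp

t1, t2, c1, c2 = sp.symbols('t1 t2 c1 c2', real=True)

def b(t, c):
    # a typical element of the Borel subgroup described above
    return sp.Matrix([[sp.exp(t), c], [0, sp.exp(-t)]])

prod = sp.simplify(b(t1, c1) * b(t2, c2))
print(prod)                                 # again upper triangular with positive diagonal
print(sp.simplify(prod.det()))              # 1
print(sp.simplify(b(t1, c1).inv()))         # the inverse has the same shape, too
```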
 
