Determine the function from a simple condition on its Jacobian matrix.

In summary, we are trying to prove that the isometry group of Minkowski spacetime is the Poincaré group, defined as the set of all affine maps ##x\mapsto \Lambda x+a## such that ##\Lambda^T\eta\Lambda=\eta##. We have a smooth function ##\phi:\mathbb R^4\to\mathbb R^4## satisfying ##J_\phi(x)^T\eta J_\phi(x)=\eta##, where ##J_\phi(x)## is the Jacobian matrix of ##\phi## at x, and ##\eta## is a given matrix. We want to prove that there exists a linear map ##\Lambda:\mathbb R^4\to\mathbb R^4## and an ##a\in\mathbb R^4## such that ##\phi(x)=\Lambda x+a## for all ##x\in\mathbb R^4##.
  • #1
Fredrik
##\phi:\mathbb R^4\to\mathbb R^4## is a smooth function such that ##J_\phi(x)^T\eta J_\phi(x)=\eta##, where ##J_\phi(x)## is the Jacobian matrix of ##\phi## at x, and ##\eta## is defined by
$$\eta=\begin{pmatrix}-1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix}.$$ I want to prove that there's a linear ##\Lambda:\mathbb R^4\to\mathbb R^4## and an ##a\in\mathbb R^4## such that ##\phi(x)=\Lambda x+a## for all ##x\in\mathbb R^4##.

Not sure what to do. An obvious idea is to consider the components of the matrix equation. I'm labeling rows and columns from 0 to 3 (because I'm trying to prove a theorem in special relativity), and I'm using the notation ##\phi^\mu{},_{\nu}## for the ##\nu##th partial derivative of the ##\mu##th component of ##\phi##. We have $$\phi^\mu{},_{\rho}(x) \eta_{\mu\nu} \phi^\nu{},_{\sigma}(x) = \eta_{\rho\sigma},$$ and therefore (now dropping the x from the notation)
\begin{align}
-1 &=\eta_{00}=-(\phi^0{},_{0})^2+(\phi^1{},_{0})^2 +(\phi^2{},_{0})^2 +(\phi^3{},_{0})^2\\
0 &=\eta_{01} = -\phi^0{},_{0} \phi^0{},_{1} +\phi^1{},_{0} \phi^1{},_{1}+\phi^2{},_{0} \phi^2{},_{1} +\phi^3{},_{0} \phi^3{},_{1}\\
&\vdots
\end{align}
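Not part of the proof, but here is a quick numerical sanity check (a numpy sketch; the boost rapidity and the translation vector are arbitrary choices of mine) that an affine map ##x\mapsto\Lambda x+a## with ##\Lambda^T\eta\Lambda=\eta## really does satisfy the Jacobian condition at every point:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# An illustrative Poincare map: boost along x^1 with rapidity 0.3,
# followed by an arbitrary translation.
v = 0.3
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(v)
Lam[0, 1] = Lam[1, 0] = np.sinh(v)
a = np.array([1.0, 2.0, 0.0, -1.0])

phi = lambda x: Lam @ x + a

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian, J[mu, nu] = d f^mu / d x^nu."""
    J = np.zeros((len(x), len(x)))
    for nu in range(len(x)):
        e = np.zeros(len(x)); e[nu] = h
        J[:, nu] = (f(x + e) - f(x - e)) / (2 * h)
    return J

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=4)
    J = jacobian(phi, x)
    assert np.allclose(J.T @ eta @ J, eta, atol=1e-6)
```

Of course this only illustrates the easy direction (affine implies the condition); the question in this thread is the converse.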
 
  • #2
What about the following?

Consider ##\mathbb{R}^4## as a manifold. This is a pseudo-Riemannian manifold with the usual connection and with your pseudo-metric. The exponential map will in this case give us a diffeomorphism: ##exp:T_0\mathbb{R}^4\rightarrow \mathbb{R}^4##.

The condition you set means that ##\varphi## is an isometry in the sense of manifolds. A theorem of differential geometry now states that

[tex]\varphi(exp(X)) = exp(\varphi_*(X))[/tex]

Since the exponential map is in this case just the identity (under the canonical identification of ##T_0\mathbb{R}^4## with ##\mathbb{R}^4##), we get that

[tex]\varphi(x) = \varphi_*(x)[/tex]

In particular, ##\varphi## itself will satisfy ##<\varphi(x),\varphi(y)> = <x,y>##, where ##<~,~>## is the pseudo-inner product.

Assume that ##\varphi(0) = 0##. Take ##\alpha\in \mathbb{R}##, then for all ##y\in \mathbb{R}^4##, we have

[tex]<\varphi(\alpha x),\varphi(y)> = \alpha<x,y> = <\alpha\varphi(x),\varphi(y)>[/tex]

So if ##\varphi## is surjective, we get that ##\varphi(\alpha x) = \alpha\varphi(x)##. By similar consideration, we can deduce that ##\varphi## is linear, as desired.
 
  • #3
Thank you. That's an interesting approach. I have forgotten everything I once knew about the exponential map, so I don't quite understand this right now, but I'm going to open up my Lee and refresh my memory.

Maybe I should have mentioned what I'm really trying to do. I want to prove rigorously that the isometry group of Minkowski spacetime (defined as ##\mathbb R^4## with the Minkowski metric) is the Poincaré group, defined as the set of all affine maps ##x\mapsto \Lambda x+a## such that ##\Lambda^T\eta\Lambda=\eta##. The problem I asked about came up after I proved that ##\phi## is an isometry if and only if its Jacobian ##J_\phi(x)## satisfies ##J_\phi(x)^T\eta J_\phi(x)=\eta## for all x. This seemed like a good start, because the fact that the right-hand side is independent of x makes it plausible that the Jacobian is too. So I was planning to prove that, and then try to use it to prove that ##\phi## is affine. Your approach makes it look like what I did was an unnecessary detour.

I will go and refresh my memory about the exponential map right away.
 
  • #4
Why not just solve for the Killing vector fields of Minkowski space-time, which generate all possible one-parameter groups of isometries of Minkowski space-time, and show that they must necessarily come from the proper Poincaré group? That is, note that in Minkowski space-time Killing's equation becomes ##\partial_{(a}\xi_{b)} = 0##. Then ##\partial_{c}\partial_{a}\xi^{b} + \partial_{c}\partial_{b}\xi^{a} = 0\Rightarrow \partial_{a}\partial_{c}\xi^{b} + \partial_{b}\partial_{c}\xi^{a} = -\partial_{a}\partial^{b}\xi_{c} - \partial_{b}\partial^{a}\xi_{c} = -\partial_{a}\partial^{b}\xi_{c} - \partial_{a}\partial^{b}\xi_{c} = 0##, where I used the fact that ##\partial_{a}\xi^{b} = -\partial^{b}\xi_{a}## and that partials commute; hence ##\partial_{a}\partial_{b}\xi^{c} = 0##.

From this we see that ##\xi^{a} = M^{a}{}{}_{b}x^{b} + t ^{a}##, where ##M^{a}{}{}_{b}## is a constant matrix and ##t ^{a}## is a constant vector. Hence every one-parameter group of isometries of Minkowski space-time is in correspondence with a vector field ##\xi^{a}## of the above form. Note that ##\partial_{a}\xi_{b} + \partial_{b}\xi_{a} = \partial_{a}(M_{bc}x^{c}) + \partial_{b}(M_{ac}x^{c}) = M_{bc}\delta^{c}_{a} + M_{ac}\delta^{c}_{b} = M_{ba}+ M_{ab} = 0 \Rightarrow M_{ab} = -M_{ba}##.

It can be shown that ##M^{a}{}{}_{b}## is necessarily the generator of boosts and rotations and that ##t^{a}## is necessarily the generator of translations. The Killing vector fields of Minkowski space-time form the Poincaré algebra, and this generates the associated isometry group.
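To see the last claim concretely, here is a small numerical sketch (assuming numpy and scipy are available; the antisymmetric generator is randomly chosen, purely for illustration) that an ##M_{ab}## with ##M_{ab}=-M_{ba}##, after raising an index with ##\eta##, exponentiates to a Lorentz transformation:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A random generator with lower indices antisymmetric: M_ab = -M_ba.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
M_lower = 0.3 * (A - A.T)

# Raise the first index: M^a_b = eta^{ac} M_cb (note eta^{-1} = eta).
M_mixed = np.linalg.inv(eta) @ M_lower

# The antisymmetry is exactly the Killing condition for xi = M x + t:
assert np.allclose(M_lower + M_lower.T, 0)

# Exponentiating the mixed-index generator gives a Lorentz matrix:
Lam = expm(M_mixed)
assert np.allclose(Lam.T @ eta @ Lam, eta, atol=1e-8)
```

This works because ##M_{ab}=-M_{ba}## is equivalent to ##M^T\eta+\eta M=0## for the mixed-index matrix, which is the Lie-algebra condition for ##\exp(M)^T\eta\exp(M)=\eta##.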
 
  • #5
micromass said:
Consider ##\mathbb{R}^4## as a manifold. This is a pseudo-Riemannian manifold with the usual connection and with your pseudo-metric. The exponential map will in this case give us a diffeomorphism: ##exp:T_0\mathbb{R}^4\rightarrow \mathbb{R}^4##.
There are two exponential maps, one for Riemannian manifolds, and one for Lie groups. We are dealing with a Lorentzian (pseudo-Riemannian) manifold, so it looks like the former is undefined. (Tangent vectors with "norm" 0 appear to be a problem). Some of your statements look like what Lee is saying about the exponential map on Lie groups, on pages 523-525 of the first edition of "Introduction to smooth manifolds", so I'm assuming that this is the exponential map you're talking about. For this one to apply, we have to view ##\mathbb R^4## as a Lie group. The obvious way to do that is to identify it with the translation group.

micromass said:
A theorem of differential geometry now states that

[tex]\varphi(exp(X)) = exp(\varphi_*(X))[/tex]
Page 525 has a formula ##\exp(tF_*X)=F(\exp tX)##, where F is a Lie group homomorphism. It looks like your formula with t=1 and ##F=\varphi##. But I only assumed that ##\phi## is an isometry. Here we seem to need to use that it's a Lie group homomorphism. The group operation on ##\mathbb R^4##, viewed as a Lie group by identification with the translation group, is addition. So it looks like we need to assume that ##\phi(x+y)=\phi(x)\phi(y)##, which is a major part of what we're trying to prove.

Did you mean a different theorem?
 
  • #6
WannabeNewton said:
Why not just solve for the Killing vector fields of Minkowski space-time,
There are three reasons I haven't considered anything like that.

1. I don't even remember the definition of a Killing field. (I will go get my Wald after I've finished typing this).

2. If my approach works, the proof will require a lot less knowledge of differential geometry from its readers than both yours and micromass' approaches. What I have done so far is to use the definition of "isometry" and the fact that the identity map is a coordinate system, to completely eliminate the differential geometry from the problem.

3. I think I have already done this my way a few years ago, but I didn't make any notes on this step, other than something like "it's boring, so I'm not going to type it up". This suggests that it's not too hard, but it's also possible that I did something very wrong back then.

WannabeNewton said:
##\partial_{c}\partial_{b}\xi^{a}+\partial_{b}\partial_{c}\xi^{a} = 0\Rightarrow \partial_{b}\partial_{c}\xi^{a} = 0##.
What is going on in this step? Edit: I copied the wrong step. :smile:

WannabeNewton said:
From this we see that ##\partial_{c}\xi^{a} = t ^{a}\Rightarrow \xi^{a} = M^{a}{}{}_{b}x^{b} + t ^{a}## where ##M^{a}{}{}_{b}## is a constant matrix and ##t ^{a}## is a constant vector. Hence every one-parameter group of isometries of Minkowski space-time is in correspondence with a vector field ##\xi^{a}## of the above form. Note that ##\partial_{a}\xi_{b} + \partial_{b}\xi_{a} = \partial_{a}(M_{bc}x^{c}) + \partial_{b}(M_{ac}x^{c}) = M_{bc}\delta^{c}_{a} + M_{ac}\delta^{c}_{b} = M_{ba}+ M_{ab} = 0 \Rightarrow M_{ab} = -M_{ba}##.
Understood.

WannabeNewton said:
It can be shown that ##M^{a}{}{}_{b}## is necessarily the generator of boosts and rotations and that ##t^{a}## is necessarily the generator of translations. The Killing vector fields of Minkowski space-time form the Poincaré algebra, and this generates the associated isometry group.
This all sounds pretty complicated.
 
  • #7
Fredrik said:
3. I think I have already done this my way a few years ago, but I didn't make any notes on this step, other than something like "it's boring, so I'm not going to type it up". This suggests that it's not too hard, but it's also possible that I did something very wrong back then.
I see. By "my way" do you mean your method in post #1?

Fredrik said:
What is going on in this step?
Sorry I fudged a step there. I'll edit it.

Fredrik said:
This all sounds pretty complicated.
The translation part is easy since, in a global inertial coordinate system, the constant vector must have the form ##t^{\mu} = a^{\nu}(\partial_{\nu})^{\mu}##, which is just a translation in an arbitrary direction in space-time. That ##M^{a}{}{}_{b}## corresponds to boosts and rotations takes more work. The stuff on the Poincaré algebra and the overall connection to the Poincaré group is something Srednicki and Peskin/Schroeder both cover in their QFT texts, if I recall correctly. I'll check and let you know, if you're interested and want to read up on it after you've finished the proof using your purely group-theoretical method (I wish I could help you there but my formal knowledge of group theory is quite lacking T_T).

EDIT: OK I fixed the steps and made things clearer :)
 
  • #8
WannabeNewton said:
I see. By "my way" do you mean your method in post #1?
Yes.

WannabeNewton said:
Sorry I fudged a step there. I'll edit it.
I copied the wrong step. :smile: It was the step before the one I quoted that confused me. I understand the calculation now that I've seen your edit.

WannabeNewton said:
The translation part is easy since, in a global inertial coordinate system, the constant vector must have the form ##t^{\mu} = a^{\nu}(\partial_{\nu})^{\mu}##, which is just a translation in an arbitrary direction in space-time. That ##M^{a}{}{}_{b}## corresponds to boosts and rotations takes more work. The stuff on the Poincaré algebra and the overall connection to the Poincaré group is something Srednicki and Peskin/Schroeder both cover in their QFT texts, if I recall correctly. I'll check and let you know, if you're interested and want to read up on it after you've finished the proof using your purely group-theoretical method (I wish I could help you there but my formal knowledge of group theory is quite lacking T_T).
I am certainly interested in other approaches than my own. I will try to understand your approach better.

Note that the part that I'm having difficulties with in my approach is not group theoretical. I have reduced the problem to a calculus problem. The next step should be to prove that if F is a square-matrix-valued function that satisfies ##F(x)^T\eta F(x)=\eta## for all ##x\in\mathbb R^4##, then F is constant. (Edit: This can't be right, not for an arbitrary F that satisfies that condition. The fact that F is the Jacobian of a smooth bijection must be used somehow). This will tell us that the Jacobian of ##\phi## is constant, which means that all the first-order partial derivatives of ##\phi## are constant. This should imply that all the higher-order partial derivatives are 0.
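The Edit above can be illustrated numerically (a 1+1-dimensional numpy sketch; the position-dependent rapidity is my own arbitrary choice): a boost field whose rapidity varies with x satisfies ##F(x)^T\eta F(x)=\eta## at every point without being constant, so the pointwise condition alone really is too weak:

```python
import numpy as np

eta = np.diag([-1.0, 1.0])

def F(x):
    """A boost whose rapidity depends on the point x (illustrative choice).
    Pointwise it satisfies F^T eta F = eta, but it is not constant."""
    v = x[0]
    return np.array([[np.cosh(v), np.sinh(v)],
                     [np.sinh(v), np.cosh(v)]])

x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
assert np.allclose(F(x1).T @ eta @ F(x1), eta)
assert np.allclose(F(x2).T @ eta @ F(x2), eta)
assert not np.allclose(F(x1), F(x2))  # ...yet F is not constant
```

(Whether such an F can actually arise as the Jacobian of a map is a separate integrability question, which is exactly the extra structure the Edit says must be used.)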
 
  • #9
Fredrik said:
There are two exponential maps, one for Riemannian manifolds, and one for Lie groups. We are dealing with a Lorentzian (pseudo-Riemannian) manifold, so it looks like the former is undefined.

I'm pretty sure that there is also an exponential map that works for pseudo-Riemannian manifolds. Lee might not cover this, but wiki seems to hint that there might be such a generalization:

In Riemannian geometry, an exponential map is a map from a subset of a tangent space TpM of a Riemannian manifold (or pseudo-Riemannian manifold) M to M itself. The (pseudo) Riemannian metric determines a canonical affine connection, and the exponential map of the (pseudo) Riemannian manifold is given by the exponential map of this connection.
 
  • #10
micromass said:
I'm pretty sure that there is also an exponential map that works for pseudo-Riemannian manifolds. Lee might not cover this, but wiki seems to hint that there might be such a generalization:
I suppose it's only a matter of choosing a preferred parametrization of the null geodesics, so it sounds doable.

Can you tell me where I can read a full statement of the theorem that involves the formula you wrote as ##\varphi(exp(X)) = exp(\varphi_*(X))##?
 
  • #11
Fredrik said:
I suppose it's only a matter of choosing a preferred parametrization of the null geodesics, so it sounds doable.

Can you tell me where I can read a full statement of the theorem that involves the formula you wrote as ##\varphi(exp(X)) = exp(\varphi_*(X))##?

It's proposition 5.9 in Lee's Riemannian manifolds. But again, this only deals with a Riemannian manifold. The problem is that I don't know any good math texts that deal with pseudo-Riemannian structures...
 
  • #12
One possible aid could come from knowing the family of functions that satisfy the PDE

[tex]
(\partial_0\phi)^2 - \|\nabla\phi\|^2 = 1
[/tex]

I know that a function

[tex]
\phi(x_0,x) = x_0\cosh(c) + (u\cdot x)\sinh(c)
[/tex]

satisfies the PDE with constants [itex]u,c[/itex], where [itex]\|u\|=1[/itex], but what are the other solutions?
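For what it's worth, the given family can be checked symbolically (a sympy sketch, restricted to one spatial dimension so that ##u=1##):

```python
import sympy as sp

x0, x1, c = sp.symbols('x0 x1 c', real=True)

# The candidate solution with u = 1 in one spatial dimension:
phi = x0 * sp.cosh(c) + x1 * sp.sinh(c)

# Check (d_0 phi)^2 - (d_1 phi)^2 = 1:
lhs = sp.diff(phi, x0)**2 - sp.diff(phi, x1)**2
assert sp.simplify(lhs - 1) == 0
```

The check reduces to the identity ##\cosh^2 c-\sinh^2 c=1##, which is why the family works for any constant c.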

Assume now that space has only one dimension:

[tex]
\left(\begin{array}{cc}
\partial_0\phi_0 & \partial_0\phi_1 \\
\partial_1\phi_0 & \partial_1\phi_1 \\
\end{array}\right)
\left(\begin{array}{cc}
-1 & 0 \\ 0 & 1 \\
\end{array}\right)
\left(\begin{array}{cc}
\partial_0\phi_0 & \partial_1\phi_0 \\
\partial_0\phi_1 & \partial_1\phi_1 \\
\end{array}\right)
= \left(\begin{array}{cc}
-1 & 0 \\ 0 & 1 \\
\end{array}\right)
[/tex]

is equivalent to the equations

[tex]
-(\partial_0\phi_0)^2 + (\partial_0\phi_1)^2 = -1
[/tex]
[tex]
-(\partial_1\phi_0)^2 + (\partial_1\phi_1)^2 = 1
[/tex]
[tex]
-(\partial_0\phi_0)(\partial_1\phi_0) + (\partial_0\phi_1)(\partial_1\phi_1) = 0
[/tex]

Assuming I worked this right, a finite amount of manipulation implies

[tex]
(\partial_0\phi_0)^2 - (\partial_1\phi_0)^2 = 1
[/tex]

which now only involves the function [itex]\phi_0[/itex].

It probably turns out that the non-linear solutions of this PDE imply some contradictions with the previous conditions.
 
  • #13
I've been looking at micromass' approach some more. It looks like that will work. I just have to consider the map ##\phi-\phi(0)## instead of ##\phi## itself. And of course, I have to refresh my memory about connections, geodesics, etc.

Lee's intro to the exponential map ends with the following comment: "We note in passing that the results of this section apply with only minor changes to pseudo-Riemannian metrics, or indeed to any linear connection." But he doesn't say what the minor changes are.

I've been thinking about my own approach, and I think I have an idea about what I did years ago, when I thought that I had proved this. It's not a rigorous argument in the form I'm presenting it here, but maybe it can be made rigorous.

If we can write ##\phi## (which is smooth) as a power series,
$$\phi(x)=\phi(0)+x^\mu \phi,_{\mu}(0) +\frac{1}{2}x^\mu x^\nu\phi,_{\mu\nu}(0)+\cdots,$$ then we have
$$\phi^\alpha{},_\beta(x) =\phi^\alpha{},_\beta(0)+\frac{1}{2} x^\nu \phi^\alpha{},_{\beta\nu}(0)+\frac{1}{2}x^\mu\phi^\alpha{},_{\mu\beta}(0)+\cdots $$ If we insert this into
$$\eta_{\beta\delta}=\phi^\alpha,_\beta(x) \eta_{\alpha\gamma} \phi^\gamma,_\delta(x),$$ we get an equality between power series, with no components of x on the left, and lots of components of x on the right. I'm not sure what exactly the rule is here. I think the coefficient of each product of components of x must match the corresponding coefficient on the other side. But on the left, almost all of them are zero. This seems to imply that all the second- and higher-order derivatives of ##\phi## at 0 are 0.
 
  • #14
Hey Fredrik, check out pages 17 and 18 of Srednicki's QFT text. He talks about the Poincaré algebra and how to relate the matrix ##M^{ab}## I wrote above to boosts and rotations.
 
  • #15
Fredrik said:
If we can write ##\phi## (which is smooth) as a power series,
$$\phi(x)=\phi(0)+x^\mu \phi,_{\mu}(0) +\frac{1}{2}x^\mu x^\nu\phi,_{\mu\nu}(0)+\cdots,$$ then we have
$$\phi^\alpha{},_\beta(x) =\phi^\alpha{},_\beta(0)+\frac{1}{2} x^\nu \phi^\alpha{},_{\beta\nu}(0)+\frac{1}{2}x^\mu\phi^\alpha{},_{\mu\beta}(0)+\cdots $$ If we insert this into
$$\eta_{\beta\delta}=\phi^\alpha,_\beta(x) \eta_{\alpha\gamma} \phi^\gamma,_\delta(x),$$ we get an equality between power series, with no components of x on the left, and lots of components of x on the right. I'm not sure what exactly the rule is here. I think the coefficient of each product of components of x must match the corresponding coefficient on the other side. But on the left, almost all of them are zero. This seems to imply that all the second- and higher-order derivatives of ##\phi## at 0 are 0.

This is nice, but it of course only works for analytic functions. Maybe we can use some kind of density argument to show that it holds more generally as well?
 
  • #16
WannabeNewton said:
Hey Fredrik check out pages 17 and 18 of Srednicki's QFT text. He'll talk about the Poincare algebra and how to relate the matrix ##M^{ab}## I wrote above to boosts and rotations.
Thanks for the tip. That's actually a calculation I've done many times while studying Weinberg's chapter 2. It shows that the generators of the Poincaré group (in a unitary representation on a complex Hilbert space) satisfy the commutation relations of the Poincaré algebra.

I haven't had time to refresh my memory about Killing vectors and stuff, but it seems to me that the biggest difficulty in your approach is to show that the independent components of your ##M^{ab}## are generators of rotations and boosts.
 
  • #17
Jostpuur, I'm looking at your idea too, trying to see what I can do with it. I think you mixed up the rows and columns of the Jacobian (or maybe there are different conventions), but it doesn't matter for the end result.
 
  • #18
Jostpuur, I've been looking some more at your approach. I don't see a way to use it to solve the problem. Here's what I'm thinking: In the 1+1-dimensional case, your approach gives us a way to find some solutions to the differential equation. The solutions we find are these:
$$\phi(x)=\begin{pmatrix}x^0\cosh v+x^1\sinh v\\ x^0\sinh v+x^1\cosh v\end{pmatrix} =\begin{pmatrix}\cosh v & \sinh v\\ \sinh v & \cosh v\end{pmatrix}\begin{pmatrix}x^0\\ x^1\end{pmatrix}.$$ Here v is an arbitrary real number. What we have done here is just to note that if ##\phi## is a Lorentz transformation, then the equation holds. In the 3+1-dimensional case, this corresponds to noticing that the equation holds when ##\phi## is linear and such that its matrix representation in the standard basis satisfies ##\phi^T\eta\phi=\eta##.

It's quite easy to show that if there's a Lorentz transformation ##\Lambda## and an ##a\in\mathbb R^4## such that ##\phi(x)=\Lambda x+a## for all ##x\in\mathbb R^4##, then ##J_\phi(x)^T\eta J_\phi(x)=\eta## for all ##x\in\mathbb R^4##. The difficult part is to show that ##\phi## must be of this form. My power series argument (post #13) rules out all other polynomials and power series, but the possibility of non-analytic smooth solutions remains. I wonder if it's possible to prove that ##\phi## must be analytic.
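The claim about this one-parameter family can be verified symbolically (a sympy sketch): for linear ##\phi## the Jacobian is the matrix itself, so the condition reduces to ##\Lambda^T\eta\Lambda=\eta##:

```python
import sympy as sp

v = sp.symbols('v', real=True)
eta = sp.Matrix([[-1, 0], [0, 1]])
Lam = sp.Matrix([[sp.cosh(v), sp.sinh(v)],
                 [sp.sinh(v), sp.cosh(v)]])

# For phi(x) = Lam x, J_phi(x) = Lam everywhere, so check Lam^T eta Lam = eta:
residual = (Lam.T * eta * Lam - eta).applyfunc(sp.simplify)
assert residual == sp.zeros(2, 2)
```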
 
  • #19
The original problem is the kind that interests me too, but I've been unable to understand anything about the differential-geometry-related stuff in this thread :frown:

Once the problem is solved, perhaps it will serve as motivation for those differential-geometric methods? I'll be eagerly awaiting a summary of all this...

Fredrik said:
My power series argument (post #13) rules out all other polynomials and power series, but the possibility of non-analytic smooth solutions remains.

The series did not convince me, since they looked too complicated. I would attempt a similar thing more rigorously by differentiating both sides of

[tex]
J_{\phi}(x)^T\eta J_{\phi}(x) = \eta
[/tex]

and obtain

[tex]
\Big(\frac{\partial}{\partial x_i} J_{\phi}(x)^T\Big)\eta J_{\phi}(x) + J_{\phi}(x)^T\eta \Big(\frac{\partial}{\partial x_i} J_{\phi}(x)\Big) = 0
[/tex]

Here the derivatives are matrices, where each element has been operated with [itex]\frac{\partial}{\partial x_i}[/itex]. It would be very nice if this formula would somehow imply [itex]\frac{\partial}{\partial x_i} J_{\phi}(x)=0[/itex], but I don't see how to manipulate towards that.
 
  • #20
Fredrik said:
I haven't had time to refresh my memory about killing vectors and stuff, but it seems to me that the biggest difficulty in your approach is to show that the independent components of your ##M^{ab}## are generators of rotations and boosts.
Well, a less fun but extremely fast way would be to go the other way: show that the 3 basic boosts, the 3 basic rotations, and the 4 basic translations are in fact Killing vector fields of Minkowski space-time, and then use the fact that a manifold can have at most ##\frac{1}{2}n(n + 1)## linearly independent Killing vector fields to conclude that these are the only Killing vector fields of Minkowski space-time (it is a maximally symmetric manifold). Then all the one-parameter groups of isometries of Minkowski space-time (which are in correspondence with these 10 linearly independent Killing vector fields) can be related to the proper Poincaré group.
 
  • #21
jostpuur said:
I would attempt a similar thing more rigorously by differentiating the boths sides of

[tex]
J_{\phi}(x)^T\eta J_{\phi}(x) = \eta
[/tex]

and obtain

[tex]
\Big(\frac{\partial}{\partial x_i} J_{\phi}(x)^T\Big)\eta J_{\phi}(x) + J_{\phi}(x)^T\eta \Big(\frac{\partial}{\partial x_i} J_{\phi}(x)\Big) = 0
[/tex]

Here the derivatives are matrices, where each element has been operated with [itex]\frac{\partial}{\partial x_i}[/itex]. It would be very nice if this formula would somehow imply [itex]\frac{\partial}{\partial x_i} J_{\phi}(x)=0[/itex], but I don't see how to manipulate towards that.
I tried that too. Didn't see a way to make it work.
I have already typed up the part of the proof that shows that ##\phi## is an isometry if and only if ##J_\phi(x)^T\eta J_\phi(x)=\eta## for all x, for my own notes, so I might as well post it here, if anyone is interested. I think it's a really nice exercise in how to use pushforwards and pullbacks. But if there's no way to finish this, e.g. by proving that ##\phi## must be analytic, then I will have to abandon this and use micromass' approach instead.

Let (M,g) be Minkowski spacetime (with ##M=\mathbb R^4##). Let ##\phi:M\to M## be an arbitrary smooth map. Let ##I## be the identity map on ##\mathbb R^4##. We will be using it as a coordinate system. The following statements are clearly equivalent.

(a) ##\phi## is an isometry.
(b) ##\phi^*g=g##.
(c) For all ##x\in M## and all ##u,v\in T_xM##, we have ##(\phi^*g)_x(u,v)=g_x(u,v).##
(d) For all ##x\in M##, and all ##\mu,\nu\in\{0,1,2,3\}##, we have ##(\phi^*g)_x(\partial_\mu|_x,\partial_\nu|_x)=\eta_{\mu\nu}##.

The left-hand side of the equality in (d) is
$$(\phi^*g)_x(\partial_\mu|_x,\partial_\nu|_x) =g_{\phi(x)}(\phi_*\partial_\mu|_x,\phi_*\partial_\nu|_x).$$ For all ##\rho\in\{0,1,2,3\}##, we have
\begin{align}
\phi_* \partial_\mu|_x(I^\rho) =\partial_\mu|_x(I^\rho\circ\phi) =\partial_\mu|_x(\phi^\rho) =(\phi^\rho\circ I^{-1})_{,\mu}(I(x)) =\phi^\rho{}_{,\mu}(x).
\end{align}
This implies that ##\phi_*\partial_\mu|_x=\phi^\rho{}_{,\mu}(x)\partial_\rho|_{\phi(x)}##. We can obviously derive a similar result for ##\phi_*\partial_\nu|_x## in the same way. So the left-hand side of the equality in (d) is equal to
\begin{align}
g_{\phi(x)}\left(\phi^\rho{}_{,\mu}(x)\partial_\rho|_{\phi(x)}, \phi^\sigma{}_{,\nu}(x)\partial_\sigma|_{\phi(x)}\right) =\phi^\rho{}_{,\mu}(x)\eta_{\rho\sigma} \phi^\sigma{}_{,\nu}(x)
\end{align} So the equality in (d) is equivalent to
\begin{align}
\phi^\rho{}_{,\mu}(x)\eta_{\rho\sigma} \phi^\sigma{}_{,\nu}(x) =\eta_{\mu\nu}.
\end{align} This is the ##\mu,\nu## component of the matrix equation
\begin{align}
J_\phi(x)^T\eta J_\phi(x) =\eta,
\end{align} where ##J_\phi(x)## is the Jacobian matrix of ##\phi## at x. So (d) is telling us exactly that this matrix equation holds for all ##x\in\mathbb R^4##.
 
  • #22
micromass said:
A theorem of differential geometry now states that

[tex]\varphi(exp(X)) = exp(\varphi_*(X))[/tex]

So since the exponential map is an identity

So we are using some advanced and strong theorem to prove a simpler special case?

It is probably inappropriate to ask for the proof of this advanced theorem in this thread, since the proof can be found in books (and is probably long), but might it be reasonable to discuss the ideas of the proof in this special-case setting?

It seems that in linear spaces isometries are linear, and the mentioned theorem then generalizes the result to more general Lie groups? Proving the original problem with this theorem seems like shooting a fly with a cannon.
 
  • #23
If a mapping [itex]\phi:\mathbb{R}^N\to\mathbb{R}^N[/itex] satisfies the property [itex]\|\phi(x)-\phi(y)\|=\|x-y\|[/itex] for all [itex]x,y\in\mathbb{R}^N[/itex], how do you prove that [itex]\phi(x)=Ax+a[/itex] with some [itex]A\in\textrm{O}(N)[/itex] and [itex]a\in\mathbb{R}^N[/itex]?
 
  • #24
jostpuur said:
If a mapping [itex]\phi:\mathbb{R}^N\to\mathbb{R}^N[/itex] satisfies the property [itex]\|\phi(x)-\phi(y)\|=\|x-y\|[/itex] for all [itex]x,y\in\mathbb{R}^N[/itex], how do you prove that [itex]\phi(x)=Ax+a[/itex] with some [itex]A\in\textrm{O}(N)[/itex] and [itex]a\in\mathbb{R}^N[/itex]?

If I assume that [itex]\phi[/itex] is differentiable, then

[tex]
\phi(x)-\phi(y) = (\nabla\phi(y))\cdot(x-y) + o(\|x-y\|)
[/tex]

holds when [itex]x,y[/itex] are close to each other. This implies

[tex]
(x-y)^T(x-y) = (x-y)^T (\nabla\phi(y))^T(\nabla\phi(y))(x-y) + o(\|x-y\|^2)
[/tex]

Since [itex]x,y[/itex] are arbitrary, this implies

[tex]
\textrm{id} = (\nabla\phi(y))^T (\nabla\phi(y))
[/tex]

A very similar condition as in the relativity motivated problem.
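The Euclidean computation above can be spot-checked numerically (a numpy sketch; the rotation angle and translation are arbitrary choices of mine): the finite-difference Jacobian of a rigid motion satisfies ##(\nabla\phi)^T(\nabla\phi)=\textrm{id}##:

```python
import numpy as np

# An illustrative rigid motion of R^2: rotation by 0.7 rad plus a translation.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
a = np.array([2.0, -1.0])
phi = lambda x: R @ x + a

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian."""
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x)); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

J = jacobian(phi, np.array([0.3, -0.8]))
assert np.allclose(J.T @ J, np.eye(2), atol=1e-6)
```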
 
  • #25
jostpuur said:
If a mapping [itex]\phi:\mathbb{R}^N\to\mathbb{R}^N[/itex] satisfies the property [itex]\|\phi(x)-\phi(y)\|=\|x-y\|[/itex] for all [itex]x,y\in\mathbb{R}^N[/itex], how do you prove that [itex]\phi(x)=Ax+a[/itex] with some [itex]A\in\textrm{O}(N)[/itex] and [itex]a\in\mathbb{R}^N[/itex]?
See the pdf linked to at the end of this Wikipedia page.

http://en.wikipedia.org/wiki/Mazur-Ulam_theorem

I asked a similar question myself some time ago, and got this answer from micromass. (Thanks again micro).
 
  • #26
Fredrik said:
See the pdf linked to at the end of this Wikipedia page.

http://en.wikipedia.org/wiki/Mazur-Ulam_theorem

I asked a similar question myself some time ago, and got this answer from micromass. (Thanks again micro).

(Just like the pdf file, I'm from Helsinki too :wink:)

Hm, well, next: here's one interesting question. If I know that a mapping [itex]\phi:\mathbb{R}^N\to\mathbb{R}^N[/itex] is differentiable and satisfies [itex]\textrm{id}=(\nabla\phi(x))^T(\nabla\phi(x))[/itex], how do you prove that [itex]\|\phi(x)-\phi(y)\|=\|x-y\|[/itex] holds too?

If some technique is found for this purpose, a similar technique could be used to obtain something in the Minkowski case too. Then techniques like those used in the Mazur-Ulam theorem could be used again.
 
  • #27
  • #28
Fredrik said:
I thought I studied and understood the proof back then, but I can't make sense of the proof now. So I have started a thread about that too.

https://www.physicsforums.com/showthread.php?t=697380

I see. And it turned out that in Hilbert spaces things were a lot simpler. But I guess the stronger Mazur-Ulam theorem didn't do any harm.

IMO the original problem, which deals with the Jacobians, still remains mysterious. (Although differential geometry believers probably consider it solved...)

I tried to solve the problem I posed in my previous post, and got stuck again.

Assume that [itex]\phi:\mathbb{R}^N\to\mathbb{R}^N[/itex] is continuously differentiable, and satisfies

[tex]
(\nabla\phi(x))^T(\nabla\phi(x))=\textrm{id}
[/tex]

at all points. Then

[tex]
\phi(y) - \phi(x) = \int\limits_0^1 dt\; \big(\nabla\phi((1-t)x+ty)\big)\cdot (y-x)
[/tex]

which implies

[tex]
\|\phi(x)-\phi(y)\|^2 = (\phi(x) - \phi(y))^T (\phi(x) - \phi(y))
[/tex]
[tex]
= \int\limits_0^1 dt\;\int\limits_0^1 dt'\; (y-x)^T
\big(\nabla\phi((1-t')x+t'y)\big)^T \big(\nabla\phi((1-t)x+ty)\big)(y-x)
[/tex]

I got stuck there because the two Jacobians are evaluated at different points. How do you proceed towards the result

[tex]
\cdots = \|x-y\|^2
[/tex]

?
 
  • #29
Continuity of the derivative should be enough to imply the approximation

[tex]
\|\phi(x)-\phi(y)\|^2 = \|x-y\|^2 + o(\|x-y\|^2)
[/tex]

in the limit [itex]\|x-y\|\to 0[/itex]. Perhaps this infinitesimal isometry property can be extended somehow.
 
  • #30
jostpuur said:
Continuity of the derivative should be enough to imply the approximation

[tex]
\|\phi(x)-\phi(y)\|^2 = \|x-y\|^2 + o(\|x-y\|^2)
[/tex]

in the limit [itex]\|x-y\|\to 0[/itex]. Perhaps this infinitesimal isometry property can be extended somehow.

This condition seems to imply

[tex]
\|\phi(x)-\phi(y)\|\leq \|x-y\|
[/tex]

for all [itex]x,y[/itex].
 
  • #31
Under suitable assumptions the inverse [itex]\phi^{-1}[/itex] will be differentiable too. Then

[tex]
(\nabla\phi)^T(\nabla\phi)=\textrm{id}
[/tex]

will imply

[tex]
\textrm{id} = (\nabla\phi^{-1})^T(\nabla\phi^{-1})
[/tex]

I think the necessary pieces of the puzzle are starting to come together now...
 
  • #32
jostpuur said:
This condition seems to imply

[tex]
\|\phi(x)-\phi(y)\|\leq \|x-y\|
[/tex]

for all [itex]x,y[/itex].

In the proof of this I used a triangle inequality that will not hold in Minkowski space. Damned thing...

Anyway, I would like to point out that the proof that micromass wrote in Hilbert spaces will also work in Minkowski-type spaces. Assume that [itex]\eta[/itex] is some [itex]N\times N[/itex] symmetric matrix such that zero is not one of its eigenvalues (but the eigenvalues can be both positive and negative). If [itex]\phi:\mathbb{R}^N\to\mathbb{R}^N[/itex] is a bijection that satisfies

[tex]
\big(\phi(x)-\phi(y)\big)^T\eta \big(\phi(x)-\phi(y)\big) = (x-y)^T\eta (x-y),
[/tex]

then [itex]\phi[/itex] will be an affine mapping [itex]\phi(x)=Ax+a[/itex]. At least I didn't see anything that wouldn't work in these steps when they are modified: https://www.physicsforums.com/showpost.php?p=4417877&postcount=4
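The easy converse direction of this claim can be spot-checked numerically (a numpy sketch, with an arbitrarily chosen boost and translation): an affine map ##\phi(x)=Ax+a## with ##A^T\eta A=\eta## preserves the ##\eta##-quadratic form of differences:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Illustrative A: a boost along x^1 with rapidity 0.5 (so A^T eta A = eta).
v = 0.5
A = np.eye(4)
A[0, 0] = A[1, 1] = np.cosh(v)
A[0, 1] = A[1, 0] = np.sinh(v)
a = np.array([0.5, -2.0, 1.0, 3.0])
phi = lambda x: A @ x + a

q = lambda d: d @ eta @ d  # the eta-quadratic form of a difference vector

rng = np.random.default_rng(2)
for _ in range(5):
    x, y = rng.normal(size=4), rng.normal(size=4)
    assert np.isclose(q(phi(x) - phi(y)), q(x - y))
```

This works because ##\phi(x)-\phi(y)=A(x-y)## and ##A^T\eta A=\eta##; the hard direction, of course, is the converse stated above.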
 
  • #33
jostpuur said:
Continuity of the derivative should be enough to imply the approximation

[tex]
\|\phi(x)-\phi(y)\|^2 = \|x-y\|^2 + o(\|x-y\|^2)
[/tex]

in the limit [itex]\|x-y\|\to 0[/itex]. Perhaps this infinitesimal isometry property can be extended some how.

I was overcomplicating things here. Continuity of the derivative is not needed. The result also generalizes.

Assume that [itex]\eta[/itex] is some [itex]N\times N[/itex] symmetric matrix. If [itex]\phi:\mathbb{R}^N\to\mathbb{R}^N[/itex] is a differentiable mapping such that

[tex]
(\nabla\phi(x))^T\eta (\nabla\phi(x)) = \eta
[/tex]

for all [itex]x\in\mathbb{R}^N[/itex], then

[tex]
\big(\phi(x)-\phi(y)\big)^T\eta\big(\phi(x)-\phi(y)\big) = (x-y)^T\eta (x-y) + o(\|x-y\|^2)
[/tex]

in the limit [itex]\|x-y\|\to 0[/itex] (the norm can be the Euclidean one here; the choice shouldn't matter). (To be rigorous, I think the limit must be interpreted with one of the two arguments held fixed.)
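Spelling out the Taylor step behind this approximation (a sketch, using only differentiability of [itex]\phi[/itex] at [itex]y[/itex]): from

[tex]
\phi(x) = \phi(y) + \nabla\phi(y)(x-y) + o(\|x-y\|)
[/tex]

it follows that

[tex]
\big(\phi(x)-\phi(y)\big)^T\eta\big(\phi(x)-\phi(y)\big)
= (x-y)^T\big(\nabla\phi(y)\big)^T\eta\big(\nabla\phi(y)\big)(x-y) + o(\|x-y\|^2)
= (x-y)^T\eta(x-y) + o(\|x-y\|^2),
[/tex]

since the cross terms are [itex]O(\|x-y\|)\cdot o(\|x-y\|) = o(\|x-y\|^2)[/itex].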

So the final critical gap is this: how do you extend the infinitesimal isometry property to the whole space? That's something I haven't figured out yet.
 
Last edited:
  • #34
Fredrik said:
Maybe I should have mentioned what I'm really trying to do. I want to prove rigorously that the isometry group of Minkowski spacetime (defined as ##\mathbb R^4## with the Minkowski metric) is the Poincaré group, defined as the set of all affine maps ##x\mapsto \Lambda x+a## such that ##\Lambda^T\eta\Lambda=\eta##.

So this problem was solved in the other Mazur-Ulam thread.

But this discussion also introduced a new problem, which is that if the mapping has an infinitesimal isometry property, then the isometry property extends to finite regions and possibly to the whole space. Was this solved too? I didn't understand how it happened.

The problem I asked about came up after I proved that ##\phi## is an isometry if and only if its Jacobian ##J_\phi(x)## satisfies ##J_\phi(x)^T\eta J_\phi(x)=\eta## for all x.

This sounds like the precise problem I got stuck with. Are you sure you accomplished this?
 
  • #35
jostpuur said:
Are you sure you accomplished this?
Yes, but keep in mind that I was talking about isometries in the sense of differential geometry, not in the sense of normed vector spaces. I included the proof in post #21.

jostpuur said:
So this problem was solved in the other Mazur-Ulam thread.
Unfortunately no. Micromass' suggestion to use spacetime's exponential map is still the most promising approach. I haven't verified the details yet. I still don't even know how to define the exponential map of a Lorentzian manifold like Minkowski spacetime.

The technique that micromass suggested for Hilbert spaces works for Minkowski spacetime as well. It can be used to prove things like this: If ##\Lambda:\mathbb R^4\to\mathbb R^4## is a surjective map such that ##g(\Lambda(x),\Lambda(y))=g(x,y)## for all ##x,y\in\mathbb R^4##, then ##\Lambda## is linear and its matrix representation with respect to the standard basis satisfies ##\Lambda^T\eta\Lambda=\eta##. But that's not the problem I was having difficulties with.
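The easy direction of that statement (a linear map whose matrix satisfies ##\Lambda^T\eta\Lambda=\eta## preserves ##g##) can at least be illustrated numerically; the boost below is just an assumed example:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def g(u, v):
    # Minkowski bilinear form in the standard basis
    return u @ eta @ v

# Assumed example: a boost satisfying Lam^T eta Lam = eta
beta = 0.8
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = gamma
Lam[0, 1] = Lam[1, 0] = -gamma * beta

# g(Lam x, Lam y) = x^T Lam^T eta Lam y = x^T eta y = g(x, y)
rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)
assert np.isclose(g(Lam @ x, Lam @ y), g(x, y))
```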

If I could prove that "##\phi## is an isometry" implies something like the assumption in the theorem I just stated, then I'm pretty much done. Unfortunately all I was able to prove by using the definitions in a straightforward way was that the Jacobian satisfies a condition like the one I want the function itself to satisfy.

jostpuur said:
But this discussion also introduced a new problem, which is that if the mapping has an infinitesimal isometry property, then the isometry property extends to finite regions and possibly to the whole space. Was this solved too? I didn't understand how it happened.
I don't understand what you're referring to here, or what an infinitesimal isometry property is. Hm, "infinitesimal" often refers to things related to the tangent space. So maybe you meant something like an isometry in the sense of normed vector spaces, i.e. a condition like ##g(\Lambda(x)-\Lambda(y),\Lambda(x)-\Lambda(y))=g(x-y,x-y)##? The problem I asked about in #1 doesn't start with a condition like that. It starts with the condition that ##\phi:\mathbb R^4\to\mathbb R^4## is such that ##\phi^*g=g##, where ##\phi^*## denotes the pullback map. It's defined by ##(\phi^*g)_p(u,v)=g_{\phi(p)}(\phi_*u,\phi_*v)##, where ##\phi_*## is the pushforward map, defined by ##\phi_*v(f)=v(f\circ\phi)##. I'm using all these definitions in post #21.
 
