tomdodd4598
TL;DR Summary: Susskind makes an argument for the existence of a coordinate transformation which takes the metric tensor to the flat metric with vanishing first derivatives at a particular point, but I do not understand it...
Hey there,
I've recently been going back over the basics of GR, differential geometry in particular. I was watching one of Susskind's lectures and did not understand the argument made here (26:33 - 35:40).
In short, the argument goes as follows (I think): we have some generic metric ##{ g }_{ m n }^{ ' }\left( y \right)##. Suppose we have a coordinate transformation that takes ##{ g }_{ m n }^{ ' }\left( y\right) \rightarrow { g }_{ m n }\left( x\right)## such that ##{ g }_{ m n }\left( X\right) ={ \delta }_{ m n }## for a particular point ##x=X##.
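(For reference, the relation I'm assuming between the two metrics is the usual tensor transformation law, $${ g }_{ rs }^{ ' }\left( y \right) =\frac { \partial { x }^{ m } }{ \partial { y }^{ r } } \frac { \partial { x }^{ n } }{ \partial { y }^{ s } } { g }_{ m n }\left( x\left( y \right) \right),$$ so the requirement ##{ g }_{ m n }\left( X\right) ={ \delta }_{ m n }## is really a condition on the linear part of the transformation at that point.)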
Susskind wants to show that, in general, the first derivatives, ##{ \partial }_{ r }{ g }_{ m n }\left( x\right)##, can be chosen to be zero, but the second derivatives, ##{ \partial }_{ r }{ \partial }_{ s }{ g }_{ m n }\left( x \right)##, cannot (that is, at ##x=X##).
He does this by looking at the expansion of ##x## in terms of ##y## about the point ##x=X##. For simplicity, he chooses ##{ X }^{ m }=0## and takes the ##x## and ##y## coordinate systems to share the same origin: $${ x }^{ m }={ a }_{ r }^{ m }{ y }^{ r }+{ b }_{ rs }^{ m }{ y }^{ r }{ y }^{ s }+{ c }_{ rst }^{ m }{ y }^{ r }{ y }^{ s }{ y }^{ t }+\dots$$
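To make explicit where each set of coefficients enters (taking ##{ b }_{ rs }^{ m }## and ##{ c }_{ rst }^{ m }## to be symmetric in their lower indices, which I assume is intended), the Jacobian of this expansion is $$\frac { \partial { x }^{ m } }{ \partial { y }^{ r } } ={ a }_{ r }^{ m }+2{ b }_{ rs }^{ m }{ y }^{ s }+3{ c }_{ rst }^{ m }{ y }^{ s }{ y }^{ t }+\dots,$$ so at the origin only the ##{ a }_{ r }^{ m }## contribute, the ##{ b }_{ rs }^{ m }## first show up in the first derivatives of the transformed metric, and the ##{ c }_{ rst }^{ m }## first show up in the second derivatives.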
The argument is (again, I think) that because (for the case of a four-dimensional space) ##{ \partial }_{ r }{ g }_{ m n }\left( x\right)=0## is 40 equations and ##{ b }_{ rs }^{ m }## consists of 40 variables, we can always choose values of ##{ b }_{ rs }^{ m }## that satisfy the equations. Meanwhile, ##{ \partial }_{ r }{ \partial }_{ s }{ g }_{ m n }\left( x \right) =0## is 100 equations (once the symmetry ##{ \partial }_{ r }{ \partial }_{ s }={ \partial }_{ s }{ \partial }_{ r }## is taken into account), but ##{ c }_{ rst }^{ m }## consists of only 80 variables, so we do not have enough free parameters to force the second derivatives to all vanish.
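My understanding of where those numbers come from: ##{ g }_{ m n }## is symmetric, so it has 10 independent components, and ##{ \partial }_{ r }{ g }_{ m n }## therefore has ##4\times 10=40##; likewise ##{ b }_{ rs }^{ m }##, symmetric in ##rs##, has ##4\times 10=40## independent components. At the next order, ##{ \partial }_{ r }{ \partial }_{ s }{ g }_{ m n }## has ##10\times 10=100## independent components while ##{ c }_{ rst }^{ m }##, symmetric in ##rst##, has only ##4\times 20=80##, leaving 20 second derivatives that cannot in general be removed (which, if I remember correctly, matches the number of independent components of the Riemann tensor in four dimensions).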
The problem is that I simply don't see why the existence of 40 variables in that expansion means that we can satisfy the 40 equations. Is the connection a simple one or do I just have to do something like grind out the values of the derivatives at ##x=X## using the series expansion?
Thanks in advance!