LAncienne
I'm getting myself up to speed on GR to try to understand a book by John Moffat called "Reinventing Gravity...". So far I've been using Sean Carroll's sort-of-classic course notes and a fair bit of Wikipedia. It may be a naive question, but the point I wouldn't mind comments on is the following: Often in writeups on GR, some sort of comment is made that the metric tensor describes the geometry of the GR (semi-Riemannian) manifold. Then, however, the Riemann tensor is derived and is indicated as containing ALL the information about the geometry of the manifold. However, it is a contraction of the Riemann tensor, the Ricci tensor, that is actually used in the Einstein field equation.
The metric is symmetric, hence, in component form in four dimensions, it has ten independent components. The Riemann tensor components are constructed, after all is said and done, from combinations of products of the metric tensor components and their partial derivatives. It has 20 independent components; that is, we have extracted additional information from the metric by the operations performed on it to get the Riemann tensor. We then contract the Riemann tensor to get the Ricci tensor, which is symmetric again and hence has ten independent components. So we gain some information, then lose or transform some of it in going from the metric to the Ricci tensor. One specific question is: what information is no longer available upon the contraction, and why is it (whatever it may be) not important?
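As a quick sanity check of the counts quoted above, here is a minimal sketch using the standard combinatorial formulas: a symmetric rank-2 tensor in n dimensions has n(n+1)/2 independent components, and the Riemann tensor, after all its symmetries are accounted for, has n²(n²−1)/12. (The function names here are just illustrative.)

```python
def symmetric_components(n):
    """Independent components of a symmetric rank-2 tensor
    (metric, Ricci) in n dimensions: n(n+1)/2."""
    return n * (n + 1) // 2

def riemann_components(n):
    """Independent components of the Riemann tensor in n dimensions,
    after antisymmetry in each index pair, pair exchange, and the
    first Bianchi identity: n^2(n^2 - 1)/12."""
    return n**2 * (n**2 - 1) // 12

n = 4
print(symmetric_components(n))  # 10 -- metric, and also Ricci
print(riemann_components(n))   # 20 -- Riemann
```

In four dimensions the Riemann tensor's 20 components thus split into the Ricci tensor's 10 plus 10 more that the contraction discards.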
I have a few other musings about how much information content can, in some sense, be attributed to (contained in) an operator or function (like these tensors) over the domains for which they are relevant, and (when I finally read Einstein's paper) why he contracted the Riemann tensor, supposedly the ne plus ultra. (One suspects that the stuff he had to work with, the stress-energy tensor, was a (0,2) tensor, and he was stuck.)