Has Gravity Probe B been a waste of money?

In summary: viewed from a purely scientific standpoint, it seems to have been a worthwhile investment. If we also take into account the potential ramifications of a false discovery, then it becomes a lot more important.
  • #1
Garth
After three months in orbit, following its launch on 20 April, Gravity Probe B has now entered its science phase. This success comes after a roughly 40-year saga of planning and construction, during which the project survived three attempted cancellations by NASA, in 1989, 1993, and 1995.
The project went through several external reviews of its scientific merit and technical readiness while its price tag kept on growing from around $130 million to $700 million.
The experiment has launched four almost perfectly spherical super-cooled gyroscopes into a polar orbit to test whether they precess according to the predictions of GR. It is measuring two key precessions, a north-south geodetic precession caused by the curvature of space-time and a frame-dragging or gravito-magnetic precession in an east-west direction.
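For a sense of the numbers involved, here is a rough back-of-the-envelope sketch in Python of the two GR-predicted precession rates for a circular polar orbit. The orbital radius and Earth's spin angular momentum used here are approximate values assumed for illustration, not the mission's own figures:

```python
import math

# Rough GR predictions for the two GP-B precessions described above.
# All input values are approximate and assumed for illustration only.
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c  = 2.998e8     # speed of light, m/s
GM = 3.986e14    # Earth's GM, m^3/s^2
J  = 5.86e33     # Earth's spin angular momentum, kg m^2/s (approximate)
r  = 7.02e6      # GP-B orbital radius (~642 km altitude), m

SEC_PER_YEAR = 3.156e7
MAS_PER_RAD  = 180 / math.pi * 3600 * 1000   # milliarcseconds per radian

# Geodetic (de Sitter) precession rate for a circular orbit:
#   Omega_geo = (3/2) (GM)^(3/2) / (c^2 r^(5/2))
omega_geo = 1.5 * GM**1.5 / (c**2 * r**2.5)

# Frame-dragging (Lense-Thirring) precession, averaged over a polar orbit:
#   Omega_fd = (1/2) G J / (c^2 r^3)
omega_fd = 0.5 * G * J / (c**2 * r**3)

geo_mas = omega_geo * SEC_PER_YEAR * MAS_PER_RAD
fd_mas  = omega_fd  * SEC_PER_YEAR * MAS_PER_RAD
print(f"geodetic precession: ~{geo_mas:.0f} mas/yr")
print(f"frame-dragging:      ~{fd_mas:.1f} mas/yr")
```

The geodetic rate comes out at a few thousand milliarcseconds per year and the frame-dragging rate at a few tens (the published GR predictions are about 6606 and 39 mas/yr respectively), which is why the gyroscopes and their readout have to be so extraordinarily precise.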
Many cosmologists, such as Kenneth Nordtvedt, have said that the experiment was worth doing when it was first planned in the 1960s, but that today the result is a foregone conclusion. If so then GPB will have been a colossal waste of money that sapped funds from other more worthy programmes.
The subject of this thread is to question whether Kenneth Nordtvedt is correct. At the heart of the issue is whether GR and the concordance model of cosmology are so robust that they need no further testing, or whether there are genuine grounds for questioning them. We have an opportunity to debate these issues before the results of the experiment become known in 2006 and unequivocally settle the matter one way or the other.
So the question is, “Has the GPB money been well spent or has it been a waste of money?”
 
  • #2
Interesting question. I think it is worthwhile. The Pioneer anomaly is still troubling and this may shed some light on it. Also, checking out the frame-dragging prediction in GR is of considerable interest; it has not yet otherwise been tested with any precision, AFAIK.
 
  • #3
Well, nothing in science is really a foregone conclusion. It's poor scientific practice to just assume the results of experiments. Although admittedly general relativity now has enough evidence to convince nearly anyone, it is entirely possible that we'll discover a new phenomenon in these very precise experiments.

- Warren
 
  • #4
Gravity Probe B has not been a waste of money. If it confirms frame dragging then we will know for sure that the fundamentals of GR are right. This means that problems such as anomalous star velocities in galaxies, and anomalously high galaxy velocities in galaxy clusters, must be explainable in terms of the presence of additional mass such as dark matter. So resources can be focused on research into dark matter, dark energy, etc. Once we know the fundamentals of GR are right, we can start asking why we can't get quantum mechanics to "mix" with it. A positive result confirming frame dragging from Gravity Probe B will focus minds.
 
  • #5
Chronos said:
Interesting question. I think it is worthwhile. The Pioneer anomaly is still troubling and this may shed some light on it. Also, checking out the frame-dragging prediction in GR is of considerable interest; it has not yet otherwise been tested with any precision, AFAIK.
I agree that it is worthwhile and that GR can benefit from the verification. On this same subject, I read somewhere that Kip Thorne thought the GPB project is still worthwhile science. He also said that most theorists expect GR to fail at some level, and predicted that the failure could happen in a surprising manner.

What will happen, though, if (for instance) the frame dragging effect causes the gyroscopes to deflect significantly more or less than predicted? Would physicists say "back to the drawing board with GR" or would they demand another very sensitive test (presumably by another method) as verification? That could get pretty expensive.
 
  • #6
I don't think they would get money for another test and I don't think they should.
A significantly different result than expected would mean that theorists have got a lot of work to do to justify another handout of cash.
 
  • #7
Of course checking any scientific theory has "value", but if you were allocating limited funds would you have given the money to GPB or to another project, say looking for life on Mars?
Kenneth Nordtvedt's point was that GPB was checking the Robertson parameter 'gamma', but that had now been evaluated to a high degree from timing radio pulses close to the Sun.
How valuable is the "value" of double checking GR, how might GR fall short and how significant would that be?
Garth
 
  • #8
Well, it's difficult to compare the use of funds across different branches of science. If you restrict your attention to physics alone, I believe GPB is a worthwhile physics experiment. If you want to consider astrobiology in the same breath, the water gets considerably muddier.

- Warren
 
  • #9
Garth said:
How valuable is the "value" of double checking GR, how might GR fall short and how significant would that be?
Garth
You're right in that respect. What's the cost/benefit ratio associated with what is assumed by many to be a trivial confirmation of GR? GR's problems (requiring dark matter to be put in by hand, with very specific distributions and densities that depend heavily on the job it needs to do in each circumstance) arise at galactic and galactic-cluster scales. These are complex domains with high concentrations of matter. GR seems to work very well in simpler environs, like the Solar System, where the relevant masses are dense spherical objects (and there are a lot fewer of them), so I fully expect the probe's results to be confirmatory to a high degree of confidence. The double-check could result in a level of over-confidence in GR that is unwarranted given its predictive performance at very large scales.
 
  • #10
Garth said:
After three months in orbit, following its launch on 20 April, Gravity Probe B has now entered its science phase. This success comes after a roughly 40-year saga of planning and construction, during which the project survived three attempted cancellations by NASA, in 1989, 1993, and 1995.
The project went through several external reviews of its scientific merit and technical readiness while its price tag kept on growing from around $130 million to $700 million.
The experiment has launched four almost perfectly spherical super-cooled gyroscopes into a polar orbit to test whether they precess according to the predictions of GR. It is measuring two key precessions, a north-south geodetic precession caused by the curvature of space-time and a frame-dragging or gravito-magnetic precession in an east-west direction.
Many cosmologists, such as Kenneth Nordtvedt, have said that the experiment was worth doing when it was first planned in the 1960s, but that today the result is a foregone conclusion. If so then GPB will have been a colossal waste of money that sapped funds from other more worthy programmes.
The subject of this thread is to question whether Kenneth Nordtvedt is correct. At the heart of the issue is whether GR and the concordance model of cosmology are so robust that they need no further testing, or whether there are genuine grounds for questioning them. We have an opportunity to debate these issues before the results of the experiment become known in 2006 and unequivocally settle the matter one way or the other.
So the question is, “Has the GPB money been well spent or has it been a waste of money?”
Well, the $$ has been spent, so the only way to answer the question is by defining - before starting to answer - how you would decide what constitutes 'well spent' and 'a waste of money' (and any other alternatives).

One approach is to 'second guess' the decision makers at each milestone of the project ... given what they knew at those times and NOT what we know now was the decision to continue reasonable? The only realistic answers to such questions must be couched in terms of the relative merits of competing proposals/uses for the $$ at the time.

Another - similar - approach is to ask, again at each milestone, whether the scientific objectives of the project could realistically have been attained more economically by spending the remaining $ on a different project.

These things are hard to stop - rarely is the case for stopping so overwhelming, and the further along a project gets, the more is invested in it (not just $$), and you just can't admit that all our troops' lives were in vain by pulling out, now can you? :-p
 
  • #11
Yes - my point is not really fiscal but by using the cash $ value I was trying to ascertain views on the scientific value of the experiment.
For me the value of GPB, as against all other measurements to date, lies in the fact that all other tests of GR have essentially tested whether the paths of freely falling particles and photons are geodesics of the vacuum GR field equation.
 
  • #12
A report in today's New Scientist "Neutron stars steal space probe's glory" (11 Sep p10) would seem to indicate that GPB has been upstaged and therefore its funding wasted.

In their paper "Measurement of gravitational spin-orbit coupling in a binary pulsar system",(http://arxiv.org/abs/astro-ph/0408457) Stairs, Thorsett and Arzoumanian confirm that the geodetic and gravitomagnetic precessions of the axes of the neutron stars in the binary pulsar system PSR B1534+12 are as described by GR.

However, as I have said before, deductions made from raw astronomical data are theory-dependent; change the theory and those deductions may change too. So this does not necessarily imply that the precessions of the GPB gyroscopes must also be described by GR.

I believe Francis Everitt (GPB project director) was correct to insist that these observations of binary pulsars do not render the GPB experiment obsolete.

Gravitation may be best described by a non-metric, or semi-metric theory, such as that of SCC, in which highly relativistic particles, the degenerate material of these neutron stars, behave differently to ordinary matter. (see for example, "Experimental tests of the New Self Creation Cosmology and a heterodox prediction for Gravity Probe B", http://arxiv.org/abs/gr-qc/0302026 ) In this case the precessions of non-relativistic gyroscopes will differ from those of neutron stars.

This may yet be demonstrated by GPB.

- Garth
 
  • #13
Garth said:
Gravitation may be best described by a non-metric, or semi-metric theory, such as that of SCC, in which highly relativistic particles, the degenerate material of these neutron stars, behave differently to ordinary matter. (see for example, "Experimental tests of the New Self Creation Cosmology and a heterodox prediction for Gravity Probe B", http://arxiv.org/abs/gr-qc/0302026 ) In this case the precessions of non-relativistic gyroscopes will differ from those of neutron stars.

This may yet be demonstrated by GPB.

- Garth
If GPB shows the precession effect predicted by SCC (5/6 of that in GR), I'll bet you stand a pretty good chance of getting that orbiting interferometer. :cool: It would be a WHOLE lot faster and cheaper to build than GPB was, too.

Regarding the Casimir Force experiment for (relatively) flat space-time, do you have a (non-technical please for this math-challenged guy :smile:) mechanism by which ZPE interacts with the gravitational field? Is this interesting wrinkle more properly seen as an artifact of doing the math in the JF(E) coordinate system with as-yet unknown mechanism? Certainly, it makes SCC falsifiable (although a trans-Jupiter probe=$$$$$$$$)!
 
  • #14
Garth said:
Yes - my point is not really fiscal but by using the cash $ value I was trying to ascertain views on the scientific value of the experiment.
For me the value of GPB as against all other measurements to date is in the fact that all other tests of GR have essentially tested the whether the path of freely-falling particles and photons are geodesics of the vacuum GR field equation.
I think this is a *much* better question! :approve:

So, leaving aside filthy lucre, it seems to me important aspects of an evaluation might include:
- to what extent would a previously untested aspect of GR be tested?
- how important is GR within physics, cosmology, etc?
- looking ahead, how likely is it that any new aspects might be tested in other ways?
- are there any competing theories in this domain? If so, to what extent would {GPB} be able to discriminate among them?
 
  • #15
Garth said:
However, as I have said before, deductions made from raw astronomical data are theory-dependent; change the theory and those deductions may change too. So this does not necessarily imply that the precessions of the GPB gyroscopes must also be described by GR.
When you previously wrote things like this I let it slide; however, having been alerted by zforgetaboutit - wrt the CMBR (she included a link to a paper by Tegmark) - I started to think more on this.

Would you please say more about what you mean here? For example, to what extent do you feel that exploration of the consistency of observational data with different (physical, cosmological) theories is limited by the data?
 
  • #16
turbo-1 said:
If GPB shows the precession effect predicted by SCC (5/6 of that in GR), I'll bet you stand a pretty good chance of getting that orbiting interferometer. :cool: It would be a WHOLE lot faster and cheaper to build than GPB was, too.
Perhaps I missed something; isn't LISA just such an orbiting interferometer?
 
  • #17
Nereid said:
Perhaps I missed something; isn't LISA just such an orbiting interferometer?
Yes it is - I don't think it would detect the effect predicted by SCC (photons fall faster than particles), though. Garth's proposal was for an interferometer that would invert the light path every time the instrument orbited the Earth - an experiment specifically designed to test this effect. LISA might be able to test SCC, because the LISA array will be tilted 30 degrees off perpendicular, making the beams pass through some gradient in the Sun's gravitational field almost all the time. The fact that the crafts' orbits cause the triangular array to appear to rotate once every year means that the effect could not be "baselined out" at start-up and ignored. I just don't know whether the effect (periodicity = one year) would rise above the noise of the system, though. LISA is designed to look for transient events, and the crafts' positioning relative to one another will not be rigidly fixed.
 
  • #18
turbo-1 said:
I'll bet you stand a pretty good chance of getting that orbiting interferometer. It would be a WHOLE lot faster and cheaper to build than GPB was, too.

The first step would be to modify a LIGO interferometer by truncating one of its beams and sending it straight back, the Sun will do the rest. – And that would be even cheaper!

turbo-1 said:
Regarding the Casimir Force experiment for (relatively) flat space-time, do you have a (non-technical please for this math-challenged guy :smile:) mechanism by which ZPE interacts with the gravitational field? Is this interesting wrinkle more properly seen as an artifact of doing the math in the JF(E) coordinate system with as-yet unknown mechanism? Certainly, it makes SCC falsifiable (although a trans-Jupiter probe=$$$$$$$$)!

Actually the experiment need not be too expensive, it could be miniaturized and hitch a lift on another deep space probe such as one to Saturn or the outer solar system (Pluto Express?)

A quick explanation of this aspect of SCC; the details are in the published papers. Two of its principles, Mach's principle and the Local Conservation of Energy, yield two solutions for the gravitational field around a static, spherical mass. These converge as r tends to infinity, but slightly diverge in the presence of curvature. This is because the 'Casimir-force' virtual electromagnetic field contains energy but is not coupled to the Machian scalar field. The harmonisation of these two solutions requires the vacuum to have a small density. In a 'hand-waving' explanation: curvature "tries to force the two solutions apart", but the requirement for consistency between them "draws" energy from the false vacuum, which then becomes observable. This is made up of contributions of zero-point energy from every quantum matter field, which has a natural re-normalised 'cut-off' Emax determined, and therefore limited, by the harmonisation of these solutions.

Nereid said:
So, leaving aside filthy lucre, it seems to me important aspects of an evaluation might include:
- to what extent would a previously untested aspect of GR be tested?
- how important is GR within physics, cosmology, etc?
- looking ahead, how likely is it that any new aspects might be tested in other ways?
- are there any competing theories in this domain? If so, to what extent would {GPB} be able to discriminate among them?

1. I think there are three possibilities: (i) GPB behaves exactly as predicted by GR; (ii) there is a slight deviation, at the one part in 10^3 or 10^4 level, that will open the way to modify GR to allow integration with quantum gravity; or (iii) it behaves unexpectedly. As I have said before, all tests to date have essentially tested the GR vacuum field equation, asking 'do particles/photons travel on geodesics/null geodesics?' GPB does not - although now the binary pulsar PSR B1534+12 also provides an alternative test.
2. I think you know how important GR is within physics and astronomy; the fact that it cannot yet be reconciled with quantum theory suggests both are as yet incomplete, but any replacement must reproduce GR's successes and therefore reduce to GR in some limit, e.g. with GR as its first approximation.
3. Looking ahead from SCC’s point of view the crucial question will be to test the equivalence principle by comparing “how particles and photons fall”. (See my thread on the subject). [The experiment would measure how far a horizontal beam of light bends towards a gravitating body as suggested above –and below]

Nereid said:
When you previously wrote things like this I let it slide; however, having been alerted by zforgetaboutit - wrt the CMBR (she included a link to a paper by Tegmark) - I started to think more on this.

“I let it slide” - I am not sure what you mean. Is my statement “deductions made from raw astronomical data are theory-dependent; change the theory and those deductions may change too” not self-evident?

Nereid said:
Would you please say more about what you mean here? For example, to what extent to you feel that exploration of the consistency of observational data with different (physical, cosmological) theories is limited by the data?
For example if gravitation is adequately described by GR then the observation that space-time is flat means the total density parameter is unity. But in BD part of that density is scalar field energy and in SCC space-time flatness means a total density parameter of one third. Thus the conclusion about how much Dark Matter and Energy is out there depends on which gravitational theory you use to analyse the data with.

As far as the binary pulsars are concerned in SCC; as they are neutron stars their internal matter is relativistic and decoupled from the SCC scalar field force. They therefore behave exactly as in GR.

In order to answer your query about the LISA interferometer, which will not detect the difference between GR and SCC, I need first to repeat one property of SCC. In the theory, test particles and photons travel on the geodesics of GR. [The presence of the BD-type scalar field perturbs space-time, but the SCC scalar field force on particles exactly compensates for this.] Therefore in SCC the LISA beams behave exactly as in GR. However, in my modified LIGO apparatus, where the beam is perturbed by the Sun's gravitational field, or my 'space interferometer', in which one half of a split beam is sent around a circular race-track of mirrors for 2 km and re-combined with the other half-beam that has traversed just a metre or two, the beam is being compared with the physical mass of the apparatus. The SCC deflection of the beam relative to the apparatus, towards the gravitating body, is tiny - about 1 angstrom - but detectable by the interferometer; the GR deflection is, of course, null.
 
  • #19
Garth said:
A quick explanation of this aspect of SCC, the details are in the published papers. Two of its principles, Mach and the Local Conservation of Energy yield two solutions of the gravitational field around a static, spherical mass. These converge when r tends to infinity, but slightly diverge in the presence of curvature. This is because the 'Casimir-force' virtual electro-magnetic field contains energy but is not coupled to the Machian scalar field. The harmonisation of these two solutions requires the vacuum to have a small density. In a ‘hand-waving’ explanation: curvature “tries to force the two solutions apart”, but the requirement for consistency between them “draws” energy from the false vacuum, which then becomes observable. This is made up of contributions of zero-point energy from every quantum matter field, which has a natural re-normalised ‘cut-off’ Emax determined, and therefore limited, by the harmonisation of these solutions.
Thank you for the very illuminating explanation! I have been wondering for some time about how the ZPE fields can be affected by curvature, and what contributions of these fields can make to the properties of matter embedded in them.

https://www.physicsforums.com/showthread.php?t=37724
https://www.physicsforums.com/showthread.php?t=28868

I am hampered by inadequate math, however, and have been mining Citebase for papers that I can understand well enough to connect the dots. Your explanation has been more valuable than months of digging.

If SCC is correct, and ZPE is proportional to the difference between the Mach and LCE solutions (which diverge with increasing curvature), then ZPE should be a very strong player in galaxies and galactic clusters, and should be a bear in the vicinity of a black hole. If so, black holes should evaporate much more quickly in SCC than predicted under GR. The real particles created by the capture of their antiparticles by the black hole will be promoted to their "real" states at extremely high energies.

Since both members of the virtual particle-antiparticle pair have mass, the black hole will swallow either with no preference. The area outside the event horizon should therefore consist of a mix of real particles and antiparticles at very high energy states, producing some very "interesting" interactions. It seems to me that no black hole can ever appear black under these conditions. Could this be the source of quasar luminosity?

As I said above, my math is inadequate to model this. Have you done so, Garth?

Thank you again for your explanation!
 
  • #20
chroot said:
If you want to consider astrobiology in the same breath, the water gets considerably muddier.

Don't let it be said that chroot doesn't have a sense of humor!

astrobiology...search for water...ah, never mind
 
  • #21
Garth said:
The first step would be to modify a LIGO interferometer by truncating one of its beams and sending it straight back, the Sun will do the rest. – And that would be even cheaper!
IIRC, one of the (European?) gravity wave detectors was to be built with the two perpendicular arms of (considerably) unequal length - do you know which one (or is my memory failing, again)? Would it do the trick?
 
  • #22
Garth said:
Many cosmologists, such as Kenneth Nordtvedt, have said that the experiment was worth doing when it was first planned in the 1960s, but that today the result is a foregone conclusion.

Accepting a "foregone conclusion" without bothering to make an observation is bad science. The phenomenon GP-B is trying to measure has never been observed, and should not be taken for granted until quantitative measurements have been made.
 
  • #23
Nereid said:
IIRC, one of the (European?) gravity wave detectors was to be built with the two perpendicular arms of (considerably) unequal length - do you know which one (or is my memory failing, again)? Would it do the trick?
Thank you for that suggestion; yes, it should do the trick. All the laser interferometers I know of seem to have two arms of the same length, but the VIRGO set-up bounces the beam between mirrors to get an optical path length of 120 km. Truncating that beam would only require resetting one of the mirrors. VIRGO is near Pisa in Italy, which is a nice historical connection!

The crucial factor would be the difference in the path lengths between the two beams.
If that difference is L, then the two beams would be displaced relative to each other, in a direction towards the Sun, by
d = (1/4) g_Sun (L/c)^2 [~ 2 x 10^(-12) m for LIGO]
where g_Sun is the Earth's acceleration towards the Sun, about 1 cm/s^2, and c is the speed of light.

LIGO can detect a longitudinal movement of 10^(-18) m, but I am talking about a more or less vertical cyclical displacement with a 24-hour period.
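To put some illustrative numbers on the displacement formula quoted in this post, here is a quick Python sketch. The value of g_Sun (~1 cm/s^2) is the one stated above; the two path-length differences (a 4 km arm and a 120 km folded optical path) are assumed values for illustration only:

```python
# Evaluate d = (1/4) * g_sun * (L/c)^2 for a few assumed
# path-length differences L. All inputs are illustrative.
C     = 2.998e8   # speed of light, m/s
G_SUN = 0.01      # Earth's acceleration toward the Sun, m/s^2 (approx.)

def displacement(L):
    """Relative beam displacement toward the Sun for path difference L (metres)."""
    return 0.25 * G_SUN * (L / C) ** 2

for L in (4e3, 1.2e5):   # 4 km arm; 120 km folded optical path (VIRGO-like)
    print(f"L = {L:>9.0f} m  ->  d = {displacement(L):.2e} m")
```

Either way the displacement is far smaller than an atom, yet still many orders of magnitude above the ~10^(-18) m longitudinal sensitivity quoted for LIGO, which is what makes the proposal plausible in principle.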
 
  • #24
Garth said:
For example if gravitation is adequately described by GR then the observation that space-time is flat means the total density parameter is unity. But in BD part of that density is scalar field energy and in SCC space-time flatness means a total density parameter of one third. Thus the conclusion about how much Dark Matter and Energy is out there depends on which gravitational theory you use to analyse the data with.
So, can the *data* be analysed according to different assumptions? (Yes)

If the data are analysed wrt different models, can it be concluded that *the data* are consistent with model a, model b, both, neither? (hopefully Yes)

When one analyses WMAP data (for example), or SDSS + 2dF data, or distance SNe data, according to SCC, what estimates of the key (free) parameters in SCC does one get? What are the error bars on those estimates? How about a model which incorporates turbo-1's speculation re ZPE?
 
  • #25
Nereid said:
When one analyses WMAP data (for example), or SDSS + 2dF data, or distance SNe data, according to SCC, what estimates of the key (free) parameters in SCC does one get? What are the error bars on those estimates? How about a model which incorporates turbo-1's speculation re ZPE?

These are good questions, some of which require some work to answer. As I have said elsewhere, there has been funding to do that work within the GR paradigm; outside it, things are a little more difficult.

However, SCC is a freely coasting cosmology in its Einstein frame, the frame in which physical processes are best described and have already been calculated. The freely coasting model does seem to be concordant with the data.
I cannot speak of turbo-1's ZPE model but the CIPA site has quite a lot of information about it. Have you seen Eric Lerner's papers on Plasma Cosmology? I am not necessarily advocating any of them but drawing attention to the existence of these other approaches under which the data you mention delivers different conclusions.
 
  • #26
Nereid said:
How about a model which incorporates turbo-1's speculation re ZPE?
Please do not ask Garth to justify SCC by supplying proofs for my "speculation".

I have been thinking about my "speculations" for years, and have pursued them more vigorously for the past few months. Garth has been working very hard and sticking his neck out for decades. He should not be asked to defend the "speculations" of an amateur in cosmology.

Thanks.
 
  • #27
Nereid said:
When one analyses WMAP data (for example), or SDSS + 2dF data, or distance SNe data, according to SCC, what estimates of the key (free) parameters in SCC does one get? What are the error bars on those estimates?
To be more specific about SCC.
It is a highly determined theory, giving just one model with fixed cosmological parameters that can be interpreted either in its Einstein frame (particle masses conserved), suitable for comparison with observations of physical features of the universe, or its Jordan frame (photon energies conserved), in which gravitational fields and orbits are described.

It is therefore highly falsifiable.

The surprising thing, though, is that this determined model does seem to be concordant with observations, although it has not been possible to replicate all the work that has been done with the standard paradigm - I could do with some help!

What is this determined model? In the Einstein frame it is:-

i. A linearly expanding model, R(t) = t; it therefore provides the mechanism missing from the work done on the 'freely coasting universe'. Concordant with WMAP data and distant SN Ia data.

The Indian team have done a lot of the required work I mentioned above. Their motivation, starting with Kolb's 1989 paper (Ap.J. 344, 543-550, 1989, 'A Coasting Cosmology'), was based on the same insight that started me developing the New Self Creation Cosmology theory: in a linearly expanding model the density, smoothness and horizon problems of GR cosmology that Inflation was devised to fix would not exist in the first place, so Inflation would be unnecessary. (My original paper was Barber, G.A.: 1982, Gen. Relativ. Gravit. 14, 117, 'On Two Self Creation Cosmologies'.)

ii. The curvature constant is +1: the universe is closed, and a space-like surface is a sphere. But because of its linear expansion, space-time is conformally flat. (In the Einstein frame a time-like slice is a hyper-cone, in the Jordan frame a hyper-cylinder - in both cases slit it along the time axis and unroll it to a flat sheet.) This may resolve the low-frequency problem of the WMAP spectrum (no large-angle fluctuations), otherwise addressed by suggestions of a 'football' universe, etc. The universe appears flat (like the surface of a cone or cylinder) and yet is finite in size.

iii. Although the universe is finite in size and space-time is conformally flat, its total density parameter is only 1/3, 0.33. If this seems inconsistent, remember that the Friedmann equations have changed because the basic GR field equation has changed (change the theory and the deductions made using that theory change too). There is no need for Dark Energy.

iv. The density parameter of the zero-point energy, the false vacuum is determined by the SCC field equations to be 1/9, 0.11.

v. The density parameter of the rest is therefore 2/9 or 0.22. The freely coasting model suggests baryon density to be 0.20 and not 0.04 so there is no need for Dark Matter. The neutrino density now appears to be about 0.01 – 0.02. (New Scientist 4 Sep 04 pg 39 "Weighing the invisible") So the inventory of the universe is more or less complete!

I am not sure of the SCC solution for a black hole and therefore cannot say what happens in severe curvature, nor have I been able to study the Sloan Digital Sky Survey with respect to the theory. There is a lot of work still to do; nevertheless I am not discouraged and await GPB - which, as you can imagine, I do not think is a waste of money!
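As a quick sanity check on the inventory above, and on what a freely coasting model implies for the age of the universe (for R(t) proportional to t, H = 1/t, so the age is exactly 1/H0), here is a short sketch; the value H0 = 70 km/s/Mpc is an assumed illustrative input, not a figure from SCC itself:

```python
from fractions import Fraction

# Check that the SCC inventory quoted above is internally consistent:
# vacuum (1/9) plus "the rest" (2/9) should give the total of 1/3.
omega_vacuum = Fraction(1, 9)
omega_rest   = Fraction(2, 9)
omega_total  = omega_vacuum + omega_rest
assert omega_total == Fraction(1, 3)

# In a freely coasting model R(t) ~ t, so H = (dR/dt)/R = 1/t and the
# age of the universe is exactly 1/H0. H0 = 70 km/s/Mpc is assumed.
KM_PER_MPC  = 3.086e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear
H0 = 70 / KM_PER_MPC     # Hubble constant in 1/s

age_gyr = (1 / H0) / SEC_PER_GYR
print(f"coasting-universe age for H0 = 70 km/s/Mpc: {age_gyr:.1f} Gyr")
```

The coasting age of roughly 14 Gyr for that H0 is close to the concordance-model age, which is part of why the freely coasting model can remain broadly concordant with the data.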
- Garth
 
  • #28
Garth, SCC predicts that photons fall at 3/2 the speed that particles fall in a gravitational field, breaking that GR equivalence. Is there a similar prediction in SCC that antimatter would fall faster than matter in a gravitational field, thereby breaking the gravity/inertia equivalence in the presence of mass?

You can probably see where I'm going with this...a mechanism whereby the ZPE field is aligned (curved) by matter, with the anti-particle of each virtual pair more strongly drawn toward nearby matter than its particle partner.

If matter causes the virtual pairs in the quantum vacuum surrounding it to arise in a preferential orientation, there is a simple mechanism to explain space-time curvature, and gravitation might be explained without the need for Higgs fields, gravitons, etc. That gravity arises from the interaction of matter with the fields of the quantum vacuum was proposed by Andrei Sakharov almost 40 years ago (and more recently followed up by the CIPA group and others). I have not found in any of their papers a plausible mechanism for the interaction, except some rather unhelpful references to the Davies-Unruh effect. A differential in the matter/anti-matter fall rate would do the trick.

Anyway, I have been searching the web for any on-going experiment to test the fall rate of antimatter in a gravitational field, but have found only proposals and no conclusive results. The arrival times of neutrinos and anti-neutrinos from SN1987A have been cited as evidence that the fall rates are essentially equivalent, but neutrinos and anti-neutrinos are so weakly interactive that their fall rates might be statistically equivalent anyway. They are chargeless and react only to the weak force, and so would not behave in the same manner as the basically EM particle/anti-particle virtual pairs of the ZPE field.
 
  • #29
turbo-1 - An interesting question... hmmm...thank you.

In SCC there would appear to be no difference in the way matter and anti-matter react to the gravitational field. The differences are to be found when the internal pressure becomes significant. Actually, photons obey the equivalence principle; it is slow-moving particles that experience an upwards scalar field force, which decouples as the pressure increases to (1/3)ρc². Unless the internal pressure of anti-matter is different from that of ordinary matter, there would be no difference. The false vacuum, on the other hand, experiences anti-gravity of (1/2)g... makes you think...
Garth
 
  • #30
Garth said:
Have you seen Eric Lerner's papers on Plasma Cosmology? I am not necessarily advocating any of them but drawing attention to the existence of these other approaches under which the data you mention delivers different conclusions.
Possibly not his, but some time ago meteor (or someone else?) posted a link to a 64-page preprint that may have been in this vein ... it was certainly interesting, and brought home to me just how huge the task facing anyone developing a truly independent set of cosmological models is (SCC seems to face much smaller challenges, as it more directly builds on so much of the concordance view) ... IMHO, even 640 pages would not be enough!
 
  • #31
turbo-1 said:
Please do not ask Garth to justify SCC by supplying proofs for my "speculation".

I have been thinking about my "speculations" for years, and have pursued them more vigorously for the past few months. Garth has been working very hard and sticking his neck out for decades. He should not be asked to defend the "speculations" of an amateur in cosmology.

Thanks.
My apologies to any reader who, like turbo-1, may have misunderstood what I was saying.

To clarify, I was trying to say that the proponents of *any* approach (other than 'the concordance model') could be asked to provide estimates of the (free) parameters in their model(s), as determined from analysis of (publicly available) astronomical datasets. IOW, don't just 'tell us what your theory is'; also tell us 'analysing the best available data, we find that our model is consistent, and our estimates of the key parameters are {list, inc. error bars, with a statistical metric}.'
 
  • #32
I'll drink to that! Garth
 
  • #33
Garth, I appreciate the effort you have put into SCC. I read your paper and it is interesting. I still think the biggest problem you face is that SCC predicts a universe that forms too early and collapses before stars and galaxies can form. Can you reform your model so that it explains how the universe behaves now? I think not. The model Nereid suggests has observational evidence. In fact, she has a mountain of evidence in her favor.
 
  • #34
Garth said:
turbo-1 - An interesting question... hmmm...thank you.

In SCC there would appear to be no difference in the way matter and anti-matter react to the gravitational field. The differences are to be found when the internal pressure becomes significant. Actually photons obey the equivalence principle, it is slow moving particles that experience an upwards scalar field force, which decouples as the pressure increases to 1/3density c^2. Unless the internal pressure of anti-matter is different to that of ordinary matter there would be no difference. The false vacuum on the other hand experiences anti-gravity of 1/2g..makes you think...
Garth
I chewed on this question quite a while yesterday. Until then (as I posted above regarding black hole evaporation) I had assumed that ZPE particle-antiparticle masses and fall rates are essentially equivalent. It occurred to me, though, that if space-time (as expressed by the EM ZPE field) can be curved by matter, there should be a simple mechanism causing the curvature. Going back to basics (my automatic fall-back position, since I have to do all this in my head...duh), I considered what could be different about the matter-antimatter particles in virtual pairs that would align them in a gravitational field. I thought about the field of pairs flipping like magnets to their most entropic state (antimatter oriented toward the large mass, matter particles oriented away) using the "opposites attract" approach...:confused: That may ultimately be a proper model, but it left me wondering what would cause that attraction to work, aside from "force acting over a distance". That led me to the notion that the fall rate of antimatter in a gravitational field might be higher than that of matter. We really need a definitive test of the fall rate of antimatter - the CERN data were inconclusive.

That bit of asymmetry could polarize the ZPE field in the presence of large masses. It could perhaps explain a few other things. One implication for such virtual-pair alignment in the process of black hole "evaporation" would be that the black hole would capture more anti-particles than particles. That would result in more particles than anti-particles being promoted from virtual to "real" status outside the event horizon. After the inevitable (and very energetic) annihilation events near the event horizon, there would remain a net excess of new real particles to form matter (after they cooled from the ultra hot plasma state!). This is probably not going to be testable in any real sense, unless quasars are what we see when black holes behave this way.

As an extension: We see matter all around us, not anti-matter. Assuming that the universe began with equal proportions of each, could this black-hole behavior be a model for how anti-matter and matter were separated? If so, beyond the event horizons of these massive objects would be domains dominated by anti-matter. Lee Smolin has described our Universe as one fine-tuned to produce black holes (a rational alternative to the anthropic principle!), and he speculated that a prospective inhabitant of the universe in a black hole would look out through his universe's past toward a singularity, much as we view our universe in standard cosmology. I can't find that paper now, but I'm pretty certain he didn't cite a matter/antimatter selection effect. To go one step farther out on the limb :rolleyes: , these antimatter "universes" should all have equivalent black holes that preferentially eat matter, creating nice matter-rich pockets like the one we live in. Yep, it's turtles all the way down.
 
  • #35
Chronos said:
Garth, I appreciate the effort you have put into SCC. I read your paper and it is interesting. I still think the biggest problem you face is that SCC predicts a universe that forms too early and collapses before stars and galaxies can form. Can you reform your model that explains how the universe behaves now? I think not. The model Nereid suggests has observational evidence. In fact, she has a mountain of evidence in her favor.
I do not know where your concept of the SCC universe came from. In the Einstein frame it expands linearly, R(t) = t, more slowly than the GR model, and does not contract at all; in the Jordan frame it is static, R(t) = R0. An expanding universe with fixed rulers is replaced by a fixed universe and shrinking rulers. Garth
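To make the two-frame picture concrete, here is a toy sketch of the scale factors just described (my own illustration; the time units and the normalisation R0 = 1 are arbitrary):

```python
# Toy scale factors for the two SCC conformal frames described above
# (illustrative sketch only; units and normalisation R0 = 1 are arbitrary).

def R_einstein(t):
    """Einstein frame: strictly linear expansion, R(t) proportional to t."""
    return t

def R_jordan(t):
    """Jordan frame: static universe, R(t) = R0 (taken as 1 here)."""
    return 1.0

ts = [0.5, 1.0, 2.0, 4.0]

# Linear expansion is monotonic, so the Einstein-frame universe never recollapses...
assert all(R_einstein(b) > R_einstein(a) for a, b in zip(ts, ts[1:]))

# ...while the Jordan-frame radius simply stays constant.
assert all(R_jordan(t) == 1.0 for t in ts)
```

The point of the sketch is only that neither frame ever has a decreasing R(t), so a "collapse before stars form" is not part of the model in either description.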
 
