Fine structure constant probably doesn't vary with direction in space

In summary, a thread discussing the variation of the fine structure constant in space was locked due to a lack of citations from refereed journals. However, multiple papers have been published on the topic and there is ongoing debate about the validity of the claims. Some believe that the evidence is not strong enough to support the idea that the constant varies over time and in different directions, while others suggest that there may be a calibration issue or other systematic effects at play. The topic remains controversial, but is still being explored and published in reputable journals.
  • #36
twofish-quant said:
But you'll find that almost everything doesn't work, and you have to be very clever at finding things that fit the data.
My claim is that it is *in principle* easy to model accelerating universes within the standard
framework. That a particular set of data is hard to fit such models is irrelevant. Anyway, these
difficulties hardly mean that the industry of modelling accelerating universes within the
mainstream framework will be shut down anytime soon.
twofish-quant said:
How? The fine structure constant contains the charge of the electron, Planck's constant, and the speed of light. Of those three, GR only uses the speed of light. GR knows nothing about Planck's constant or the electron.
In general, it is necessary to have LPI in order to model gravity entirely as a "curved space-
time"-phenomenon. A varying fine structure constant would only be a special case of LPI-violation.
See the textbook referenced below.
twofish-quant said:
If you have references to specific textbooks, then we can discuss the issue there. I have copies of Wald, Weinberg, and Thorne on my bookshelf, and if you can point me to the page where they claim that a changing fine structure constant would violate GR, I'll look it up. Also, I know some of these people personally, so if you have a specific question, I can ask them what they think the next time I see them.
There is a nice discussion of the various forms of the EP and their connection to gravitational theories
in Clifford Will's book "Theory and experiment in gravitational physics".
twofish-quant said:
Also one rule in science. All models are wrong, some models are useful. If there is some fundamental misunderstanding about gravity, then we just go back and figure out the implications for observational conclusions. Also, you can think of things beforehand. A paper on "so what would the impact of a time-varying fine structure constant be?" is something that makes a dandy theory paper.
But how can you write such a paper without having a theory yielding the quantitative machinery
necessary to make predictions? Sure, you can put in a time-varying fine structure by hand in the
standard equations, but as I pointed out earlier, this approach is fraught with danger.
twofish-quant said:
I don't see any double standards here.
No, not here. I was speaking generally.
 
  • #37
cesiumfrog said:
The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)
Electromagnetic fields are in general not "local", so arguments based on the EP may be misleading.

But in your example, the *local* electrostatic field of the charge is not different for the two cases; if you
go to small enough distances from the charge the two cases become indistinguishable.
cesiumfrog said:
The truth of the EEP is uncoupled from the truth of GR. Whether a hypothetical phenomenon violates position invariance has no bearing on whether GR has been experimentally verified to correctly predict gravitational motion. (At worst, it changes how text-book authors post-hoc motivate their derivations of GR. Analogously SR does not cease viability despite the fact that its supposed inspiration, the perspective from riding a light beam, is now realized to be unphysical.)
The connection between the EEP and gravitational theories is described in the book
"Theory and experiment in gravitational physics" by Clifford Will. Please read that and tell us
what is wrong with it.
cesiumfrog said:
Consider a field, X, which permeates spacetime. Let there exist local experiments that depend on the local values of X. Does this falsify GR?
If X is coupled to matter fields in other ways than via the metric, yes this would falsify GR.
cesiumfrog said:
You are inconsistent claiming the answer is yes (if X is the new alpha field, which causes slightly different atomic spectra in different places) whilst also tacitly no (if X is any other known field, e.g., the EM field which by the Zeeman effect also causes slightly different atomic spectra in different places).
The alpha field does not couple to matter via the metric. Therefore, if it is not a constant, it would
falsify GR. In a gravitational field, Maxwell's equations locally take the SR form. Therefore, the EM
field couples to matter via the metric and does not falsify GR. Your example is bad and misleading.
 
  • #38
My claim is that it is *in principle* easy to model accelerating universes within the standard
framework.

My claim is that an accelerating universe causes all sorts of theoretical problems. One is the hierarchy problem. If you look at grand unified theories, there are terms that cause positive cosmological constants and those that cause negative ones, and you have unrelated terms that are different by hundreds of orders of magnitude that balance out to be almost zero.

Before 1998, the sense among theoretical high-energy cosmologists was that these terms would have some sort of symmetry that would cause them to balance out exactly. Once you put in a small but non-zero cosmological constant, then you have a big problem, since it turns out that there is no mechanism to cause them to be exactly the same, and at that point you have to come up with some mechanism that causes the cosmological constant to evolve in a way that doesn't result in massive runaway expansion.

Also, adding dark energy and dark matter is something not to be done lightly.

Anyway, these difficulties hardly mean that the industry of modelling accelerating universes within the mainstream framework will be shut down anytime soon.

I'm not sure what the "mainstream framework" is. I'm also not sure what point you are making. You seem to be attacking scientists for being closed-minded, but when I point out that none of the scientists that I know hold the dogmatic positions that you claim they are holding, you contradict that.

I've seen three theoretical approaches to modelling the accelerating universe. Either you assume

1) some extra field in dark energy,
2) you assume that GR is broken, or
3) you assume that GR is correct and people are applying it incorrectly.

Attacking the observations is difficult, because in order to remove the acceleration you have to find some way of showing that measurements of the Hubble expansion *AND* CMB data *AND* galaxy count data are being misinterpreted.

Alternative gravity models are not quite completely dead for dark matter observations, but they are bleeding heavily. There are lots of models of alternative gravity that are still in play for dark energy. The major constraints for those models are 1) we have high-precision data from the solar system that seems to indicate that GR is good at small scales and 2) there are very strong limits as far as nucleosynthesis goes. If you just make up any old gravity model, the odds are you'll find that the universe either runs away expanding or collapses immediately, and you don't even get to matching correlation functions.

People are throwing everything they can at the problem. If you think that there is some major approach or blind spot that people are having, I'd be interested in knowing what it is.

Old Smuggler said:
In general, it is necessary to have LPI in order to model gravity entirely as a "curved space-time"-phenomenon. A varying fine structure constant would only be a special case of LPI-violation.
But what does the fine structure constant have to do with gravity? Of the three components of the fine structure constant, only one has anything to do with gravity. The other two (Planck's constant and the charge of the electron) have nothing at all to do with gravity.

Now it is true that if you had a varying fine structure constant, you couldn't model EM as a purely geometric phenomenon which means that Kaluza-Klein models are out, but those have problems with parity violation so that isn't a big deal.

In any case, I do not see what is so sacred about modelling gravity as a curved space time approach (and more to the point, neither does anyone else I know in the game).

People have had enough problems with modelling the strong and weak nuclear forces in terms of curved space time, that it's possible that the "ultimate theory" has nothing to do with curved space time. We already know that the universe has chirality, and that makes it exceedingly difficult to model with curved space time. Supersymmetry was an effort to do that, but it didn't get very far.

There is a nice discussion of the various forms of the EP and their connection to gravitational theories in Clifford Will's book "Theory and experiment in gravitational physics".

So what does any of this have to do with EM?

But how can you write such a paper without having a theory yielding the quantitative machinery
necessary to make predictions?

You assume a theory and then work out the consequences, and then you look for consequences that are excluded by observations. The theory doesn't have to be correct, and one thing that I've noticed about crackpots is that they seem overly concerned about having their theories be correct rather than having them be useful. Newtonian gravity is strictly speaking incorrect, but it's useful, and for high-precision solar system calculations, people use PPN, which means that it's possible that the real theory of gravity has very different high-order terms than GR.

Sure, you can put in a time-varying fine structure by hand in the standard equations, but as I pointed out earlier, this approach is fraught with danger.

I'm not seeing the danger. You end up with something that gets you numbers and then you observe how much those numbers miss what you actually see.

What you end up with isn't elegant, and it's likely to be wrong, but GR + ugly modifications will be enough for you to make some predictions and guide your observational work until you have a better idea of what is going on.

About double standards. My point is that among myself and theoretical astrophysicists that I know, the idea of a time or spatially varying fine structure constant is no odder than an accelerating universe.

One thing about the fine structure constant is that if the idea of broken symmetry is right, then the number is likely to be random. The current idea about high energy physics is that the electro-weak theory and GUT are symmetric and elegant at high energies, but once you get to lower energies, the symmetry breaks.

The interesting thing is that the symmetry can break in different ways, so the fine structure constant may be what it is out of sheer randomness. The fine structure constant could very well be just a random number that is different in different universes.
 
  • #39
twofish-quant said:
I'm not sure what the "mainstream framework" is. I'm also not sure what point you are making. You seem to be attacking scientists for being closed-minded, but when I point out that none of the scientists that I know hold the dogmatic positions that you claim they are holding, you contradict that.
Mainstream framework=GR + all possible add-ons one may come up with. The only point I was
making is that IMO, it would be much more radical to abandon the mainstream framework
entirely than adding new entities to it. Therefore, since the latter approach is possible in principle for
modelling an accelerating universe, but not for modelling a variable fine structure constant, any
claims of the latter should be treated as much more extraordinary than claims of the former. But we
obviously disagree here, so let's agree to disagree. I have no problems with that.
twofish-quant said:
But what does the fine structure constant have to do with gravity? Of the three components of the fine structure constant, only one has anything to do with gravity. The other two (Planck's constant and the charge of the electron) have nothing at all to do with gravity.
A variable "fine structure constant field" would not couple to matter via the metric, so it would
violate the EEP and thus GR.
twofish-quant said:
So what does any of this have to do with EM?
See above. Why don't you just read the relevant part of the book before commenting further?
twofish-quant said:
You assume a theory and then work out the consequences, and then you look for consequences that are excluded by observations. The theory doesn't have to be correct, and one thing that I've noticed about crackpots is that they seem overly concerned about having their theories be correct rather than having them be useful. Newtonian gravity is strictly speaking incorrect, but it's useful, and for high-precision solar system calculations, people use PPN, which means that it's possible that the real theory of gravity has very different high-order terms than GR.
But for varying alpha you don't have a theory - therefore there is no guarantee whatever
you are doing is mathematically consistent.
twofish-quant said:
I'm not seeing the danger. You end up with something that gets you numbers and then you observe how much those numbers miss what you actually see.
But there is no guarantee that these numbers will be useful. Besides, if you depend entirely on
indirect observations, there is no guarantee that the "observed" numbers will be useful, either. That's the danger...
twofish-quant said:
What you end up with isn't elegant, and it's likely to be wrong, but GR + ugly modifications will be enough for you to make some predictions and guide your observational work until you have a better idea of what is going on.
But chances are that this approach will not be useful and that your observational work will be misled
rather than guided towards something sensible.
twofish-quant said:
About double standards. My point is that among myself and theoretical astrophysicists that I know, the idea of a time or spatially varying fine structure constant is no odder than an accelerating universe.
I have given my reasons for disagreeing, and I think your arguments are weak. But that is consistent
with my original claim - that sorting out "extraordinary" claims from ordinary ones is too subjective
to be useful in the scientific method.
 
  • #40
cesiumfrog said:
The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)
Well, not really. Examples of this type are complicated to interpret, and there has been longstanding controversy about them. Some references:

Cecile and Bryce DeWitt, "Falling Charges," Physics 1 (1964) 3
http://arxiv.org/abs/quant-ph/0601193v7
http://arxiv.org/abs/gr-qc/9303025
http://arxiv.org/abs/physics/9910019
http://arxiv.org/abs/0905.2391
http://arxiv.org/abs/0806.0464
http://arxiv.org/abs/0707.2748
 
  • #41
Old Smuggler said:
The EEP describes how the local non-gravitational physics should behave in an external gravitational
field. Moreover, the EEP consists of 3 separate parts; (i) the Weak Equivalence Principle (WEP) (the
uniqueness of free fall), (ii) Local Lorentz Invariance (LLI), and finally (iii) Local Position Invariance
(LPI). LPI says that any given local non-gravitational test experiment should yield the same
result irrespective of where or when it is performed; i.e., the local non-gravitational physics should not
vary in space-time. A class of gravitational theories called "metric theories of gravity" obeys the EEP.
Since GR is a metric theory, any measured violation of the EEP would falsify GR. That would be
serious. A varying fine structure constant represents a violation of the EEP, so this would falsify GR.
The way you've stated LPI seems to say that the e.p. is trivially violated by the existence of any nongravitational fundamental fields. For example, I can do a local nongravitational experiment in which I look at a sample of air and see if sparks form in it. This experiment will give different results depending on where it is performed, because the outcome depends on the electric field.
 
  • #42
Old Smuggler said:
But in your example, the *local* electrostatic field of the charge is not different for the two cases; if you go to small enough distances from the charge the two cases become indistinguishable.
No finite distance is small enough. (And no physical experiment is smaller than finite volume.) I think bcrowell's citing of controversy shows, at the very least, that plenty of relativists are less attached to EEP than you are portraying.

Old Smuggler said:
The connection between the EEP and gravitational theories is described in the book
"Theory and experiment in gravitational physics" by Clifford Will. Please read that and tell us
what is wrong with it.
How obtuse. If the argument is too complex to reproduce, you could at least have given a page reference. But let me quote from that book for you: "In the previous two sections we showed that some metric theories of gravity may predict violations of GWEP and of LLI and LPI for gravitating bodies and gravitational experiments." My understanding is that the concept of the EEP is simply what inspired us to use metric theories of gravity. That quote seems to show your own source contradicting your notion that LPI is prerequisite for metric theories of gravity.

Old Smuggler said:
If X is coupled to matter fields in other ways than via the metric, yes this would falsify GR.
Could you clarify? Surely the Lorentz force law is a coupling other than via the metric (unless you're trying to advocate Kaluza-Klein gravity)? (And what about if X is one of the matter fields?)
 
  • #43
The biggest theoretical issue that I can see for the spatially varying fine structure idea is that it's very difficult to do 3 things simultaneously:

1) Create a field that has a potential that varies smoothly and slowly enough, such that it still satisfies experimental constraints (and there are a lot of them, judging by the long author list in the bibliography).

2) Explain why the constant in front of the potential is so ridiculously tiny. This is a similar hierarchy type problem to the cosmological constant, and seems very unnatural if the field is to be generated in the early universe.

3) Any purported theory will also have to explain why the fine structure constant continues to evolve, but not any other gauge coupling (and you see, once you allow for multiple couplings to evolve, you run into definition problems b/c it's really only ratios that are directly measurable). That definitely has some tension with electroweak and grand unification.

Anyway, it's obviously a contrived idea in that it breaks minimality and doesn't help to solve any other obvious theoretical problem out there. Further, depending on the details of how you set up the theory, you have to pay a great deal of attention to the detailed phenomenology, for instance wondering about the nature of the field's effects (it may or may not be massless, and hence responsible for equivalence principle violations) on, say, big bang nucleosynthesis bounds and things like that.
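To put rough numbers on points (1) and (2) above (this is just a generic back-of-the-envelope estimate, not anything taken from the Webb papers): for a scalar field [itex]\varphi[/itex] with potential [itex]V(\varphi)=\tfrac{1}{2}m^2\varphi^2[/itex], the cosmological equation of motion is

[tex]\ddot{\varphi} + 3H\dot{\varphi} + m^2\varphi = 0 ,[/tex]

so the field is frozen if [itex]m \ll H[/itex] and oscillates rapidly if [itex]m \gg H[/itex]; to get a slow drift over cosmological times you need [itex]m \sim H_0 \sim 10^{-33}\,\mathrm{eV}[/itex], an absurdly tiny mass by particle-physics standards. And if the coupling to electromagnetism is of the Bekenstein type, [itex](\varphi/M)F_{\mu\nu}F^{\mu\nu}[/itex], then [itex]|\Delta\alpha/\alpha| \sim \Delta\varphi/M \sim 10^{-5}[/itex] forces the combination of mass, coupling and field excursion to be fantastically small compared to every other scale in the standard model, which is exactly the hierarchy-type problem in (2).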
 
  • #44
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.

Claiming the data must be explainable by instrument error simply because the results conflict with theory is not valid.

I read the ArXiv paper ("submitted to PRL"), and I started the ArXiv paper where they 'refute the refuters', but the two papers that they claim will have a detailed error analysis are still 'in preparation'.

I can't authoritatively claim that their error analysis is valid, because I don't fully understand the measurement (and haven't read their detailed explanation). However, it appears that they have in fact obtained a statistically significant result.

I would like to know more about their method of data analysis- specifically, steps (i) and (ii) on page 1, and their code VPFIT. Does anyone understand their method?
 
  • #45
Andy Resnick said:
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.
Thank you.
 
  • #46
Michael Murphy gives a fairly good overview of the research here:

http://astronomy.swin.edu.au/~mmurphy/res.html
 
  • #47
Andy Resnick said:
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.

I go for data analysis error. The effects that they are looking for are extremely small, and there is enough uncertainty in quasar emission line production that I don't think that has been ruled out right now.

Also, it's worth pointing out that other groups have done similar experiments and they claim results are consistent with zero.

http://arxiv.org/PS_cache/astro-ph/pdf/0402/0402177v1.pdf

There are alternative cosmological experiments that are consistent with zero

http://arxiv.org/PS_cache/astro-ph/pdf/0102/0102144v4.pdf

And there are non-cosmological experiments that are consistent with zero

http://prl.aps.org/abstract/PRL/v93/i17/e170801
http://prl.aps.org/abstract/PRL/v98/i7/e070801

See also 533...

In this section we compare the O III emission line method for studying the time dependence of the fine-structure constant with what has been called the many-multiplet method. The many-multiplet method is an extension of, or a variant on, previous absorption-line studies of the time dependence of α. We single out the many-multiplet method for special discussion since among all the studies done so far on the time dependence of the fine-structure constant, only the results obtained with the many-multiplet method yield statistically significant evidence for a time dependence. All of the other studies, including precision terrestrial laboratory measurements (see references in Uzan 2003) and previous investigations using quasar absorption lines (see Bahcall et al. 1967; Wolfe et al. 1976; Levshakov 1994; Potekhin & Varshalovich 1994; Cowie & Songaila 1995; Ivanchik et al. 1999) or AGN emission lines (Savedoff 1956; Bahcall & Schmidt 1967), are consistent with a value of α that is independent of cosmic time. The upper limits that have been obtained in the most precise of these previous absorption-line studies are generally |Δα/α(0)| < 2 × 10⁻⁴, although Murphy et al. (2001c) have given a limit that is 10 times more restrictive. None of the previous absorption-line studies have the sensitivity that has been claimed for the many-multiplet method.


Claiming the data must be explainable by instrument error simply because the results conflict with theory is not valid.

True, but the problem is that their results look to me a lot like something that comes out of experimental error. Having a smooth dipole in cosmological data is generally a sign that you've missed some calibration. It's quite possible that what is being missed has nothing to do with experimental error. I can think of a few ways you can get something like that (Faraday rotation due to polarization in the ISM).

If you see different groups using different methods and getting the same answers, you can rule out experimental error. We aren't at that point right now.

I can't authoritatively claim that their error analysis is valid, because I don't fully understand the measurement (and haven't read their detailed explanation). However, it appears that they have in fact obtained a statistically significant result.

The problem that I have is that any statistical error analysis simply will not catch systematic biases that you are not aware of, so while a statistical error analysis will tell you if you've done something wrong, it won't tell you that you've got everything right.

The reason for having different groups repeat the result with different measurement techniques is that this makes the result less vulnerable to error. If you can find evidence of a shift in anything other than the Webb group's data, that would change things a lot.
 
  • #48
Old Smuggler said:
Mainstream framework=GR + all possible add-ons one may come up with.

There's a lot of work in MOND for dark matter that completely ignores GR.

A variable "fine structure constant field" would not couple to matter via the metric, so it would violate the EEP and thus GR.

GR is solely a theory of gravity with a prescription for how to convert a non-gravitational theory to include gravity. If you have any weird dynamics then you can fold that into the non-gravitational parts of the theory without affecting GR.

See above. Why don't you just read the relevant part of the book before commenting further?

Care to give a page number?

But for varying alpha you don't have a theory - therefore there is no guarantee whatever you are doing is mathematically consistent.

Since quantum field theory and general relativity itself are not mathematically consistent, that's never stopped anyone. You come up with something and then let the mathematicians clean it up afterwards.

But there is no guarantee that these numbers will be useful. Besides, if you depend entirely on indirect observations, there is no guarantee that the "observed" numbers will be useful, either. That's the danger...

Get predictions, try to match with data, repeat.

But chances are that this approach will not be useful and that your observational work will be misled rather than guided towards something sensible.

Yes you could end up with a red herring. But if you have enough people doing enough different things, you'll eventually stumble on to the right answer.
 
  • #49
matt.o said:
Michael Murphy gives a fairly good overview of the research here:

http://astronomy.swin.edu.au/~mmurphy/res.html

I think his last two paragraphs about it not mattering whether c or e is varying are incorrect.

The thing about c is that it's just a conversion factor with no real physical meaning. You can set c=1, and this is what most people do. e is the measured electrical charge of the electron and it does have a physical meaning. You'd have serious theoretical problems in GR if c were changing over time, but you wouldn't have any problems if e or h were, since GR doesn't know anything about electrons.
 
  • #50
twofish-quant said:
The effects that they are looking for are extremely small, and there is enough uncertainty in quasar emission line production that I don't think that has been ruled out right now.
Still, what such uncertainty would explain why the data sets from the two telescopes separately give the same direction for the dipole? Do you think it is an artifact of the Milky Way?
 
  • #51
cesiumfrog said:
Still, what such uncertainty would explain why the data sets from the two telescopes separately give the same direction for the dipole?

I'm thinking of some effect that correlates with the direction of the telescope. For example, if it turned out that quasars with strong jets had magnetospheres with charged particles that caused the lines to drift, and it so happens that because you are looking in different directions you are more likely to see quasars with strong jets, because those with weak ones are more likely to get obscured by interstellar dust.

Or it turns out that when they did the star catalogs, they did them in a way that certain types of quasars are preferred in one part of the sky and not in others.

Do you think it is an artifact of the Milky Way?

Or the local ISM. You said yourself that dipoles are usually a sign of something changing at much greater scales than your observational volume. If your observational volume is the observable universe, you have something hard to explain. If it turns out that what you are seeing is nearby, it's much less hard to explain.

I think they've done a reasonable job of making sure that their result isn't equipment related, and that's important, because if it turns out that there is some unknown local ISM effect that changes quasar line widths in odd ways, that's still pretty interesting physics.
 
  • #52
Andy Resnick said:
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud. [...] I can't authoritatively claim that their error analysis is valid, because I don't fully understand the measurement (and haven't read their detailed explanation). However, it appears that they have in fact obtained a statistically significant result.
Your comments are very reasonable, so I'll take a shot at discussing what I perceive as the (lack of) reliability of the measurement itself. My research experience is in spectroscopy, and although it was a different kind of spectroscopy (gamma rays), I think it may help to give me a feel for what's going on here.

Take a look at figure 3 on p. 5 of this paper http://arxiv.org/abs/astro-ph/0306483 . They're fitting curves to absorption lines in a histogram, with some background. I used to do this for a living, albeit in a different energy range, and with positive emission lines rather than negative absorption lines. It is a *very* tricky business to do this kind of thing and determine the centroids of peaks with realistic estimates of one's random and systematic errors. Is it adequate to treat the background as flat, or do you need to include a slope? Maybe it should be a second-order polynomial? How sure are you that the profile of the peak is really, exactly Gaussian? Most importantly, there may be many low-intensity peaks that overlap with the high-intensity peak.

Given all of these imponderables, it becomes absolutely *impossible* to be certain of your error bars. Computer software will be happy to tell you your random errors, but those are only the random errors subject to the constraints and assumptions that you fed into the software. It is very common in this business to find that results published by different people in different papers differ by a *lot* more than 1 sigma. The error bars that people publish are whatever the software tells them they are, but roughly speaking, I would triple those error bars because the software can't take into account the errors that result from all the imponderables described above.
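To illustrate how sensitive a fitted centroid is to these assumptions, here is a quick synthetic example (entirely made-up numbers, nothing to do with the actual quasar spectra): an absorption line sitting on a slightly sloped continuum with a weak unresolved blend, fitted under the common assumption of a flat continuum and a single clean Gaussian.

[code]
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 400)            # wavelength offset, arbitrary units

# "True" spectrum: sloped continuum, main absorption line at x = 0,
# plus a weak blended line at x = 1.5, plus photon noise.
true_centroid = 0.0
flux = (1.0 + 0.004 * x
        - 0.60 * np.exp(-0.5 * (x - true_centroid) ** 2)
        - 0.05 * np.exp(-0.5 * (x - 1.5) ** 2))
flux += rng.normal(0.0, 0.01, x.size)

# Model the analyst assumes: flat continuum + single Gaussian.
def model(x, cont, depth, mu, sigma):
    return cont - depth * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

popt, pcov = curve_fit(model, x, flux, p0=[1.0, 0.5, 0.0, 1.0])
mu_fit, mu_err = popt[2], np.sqrt(pcov[2, 2])

print(f"fitted centroid = {mu_fit:+.4f} +/- {mu_err:.4f} (formal)")
print(f"true centroid   = {true_centroid:+.4f}")
# The formal error bar comes out tiny, but the centroid is pulled several
# times that far away from zero because the assumed model (flat continuum,
# no blend) is slightly wrong.
[/code]

The software happily reports a small random error, while the model error quietly dominates; that is the point about the imponderables above.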

In view of this, take a look at the second figure in http://astronomy.swin.edu.au/~mmurphy/res.html , the graph with redshift on the x-axis and [itex]\Delta\alpha/\alpha[/itex] on the y axis. With the given error bars, the line passes through the error bars on 8 out of 13 of the points. On a Gaussian distribution, you expect a point to be off by no more than 1 sigma about 2/3 of the time. Hey, 8/13 is very close to 2/3. So even if you believe their error bars, the evidence isn't exactly compelling.

OK, it's true that 13 out of 13 points lie below the line. This is statistically improbable unless [itex]\Delta\alpha \neq 0[/itex]. The chances that 13 out of 13 points would all lie on the same side is 2^(-12), which is on the order of 0.01%. But let's be realistic. Those error bars are probably three times too small. That means that 13 out of 13 of those data points are within 1 sigma of the line. Nobody in their right mind would take that as evidence that the data deviated significantly from the line.
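For what it's worth, the sign-test arithmetic above is easy to check; this is just binomial counting, nothing specific to their data:

[code]
n = 13                     # number of points, all on the same side of the line
# If Delta alpha really were zero and the errors were symmetric and
# independent, each point would fall below the line with probability 1/2;
# "all on the same side (either side)" doubles the single-sided probability.
p_same_side = 2 * 0.5 ** n
print(f"P(all {n} on one side) = {p_same_side:.5f} (~{100 * p_same_side:.3f}%)")
# ~0.02%, i.e. on the order of 0.01% as quoted above, taking the published
# error bars and the independence of the points at face value.
[/code]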

This is why I would characterize these results using the technical term "crap."
 
  • #53
twofish-quant said:
I'm thinking of some effect that correlates with the direction of the telescope. For example, if it turned out that quasars with strong jets had magnetospheres with charged particles that caused the lines to drift, and it so happens that because you are looking in different directions you are more likely to see quasars with strong jets, because those with weak ones are more likely to get obscured by interstellar dust.

Or it turns out that when they did the star catalogs, they did them in a way that certain types of quasars are preferred in one part of the sky and not in others.



Or the local ISM. You said yourself that dipoles are usually a sign of something changing at much greater scales than your observational volume. If your observational volume is the observable universe, you have something hard to explain. If it turns out that what you are seeing is nearby, it's much less hard to explain.

I think they've done a reasonable job of making sure that their result isn't equipment related, and that's important, because if it turns out that there is some unknown local ISM effect that changes quasar line widths in odd ways, that's still pretty interesting physics.

But none of this really explains the trend with redshift (time) that they also observe.
 
  • #54
twofish-quant said:
I think his last two paragraphs about it not mattering whether c or e is varying are incorrect.

The thing about c is that it's just a conversion factor with no real physical meaning. You can set c=1, and this is what most people do. e is the measured electrical charge of the electron and it does have a physical meaning. You'd have serious theoretical problems in GR if c were changing over time, but you wouldn't have any problems if e or h were, since GR doesn't know anything about electrons.

No, they're absolutely correct. See: http://arxiv.org/abs/hep-th/0208093 The constant e does not have a physical meaning. It is just a conversion factor between different systems of units. Just as some people work in units where c=1, some people work in units with e=1.
 
  • #55
matt.o said:
But none of this really explains the trend with redshift (time) that they also observe.

The effect they claim to have observed is only statistically significant if you assume that systematic errors are zero, assume that random errors are not underestimated, and average over a large number of data-points. Since systematic errors are never zero, and random errors are always underestimated, their effect is not significant. Since it's not statistically significant, it's pointless to speculate about whether it shows a trend as a function of some other variable like redshift.
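As a toy illustration of that point, with invented numbers rather than theirs: folding even a modest systematic error floor in quadrature into the statistical error can wipe out an apparently significant detection.

[code]
import math

dalpha = 5.0e-6        # hypothetical weighted-mean Delta alpha / alpha
sigma_stat = 1.0e-6    # hypothetical purely statistical error on that mean

for sigma_sys in (0.0, 1.0e-6, 3.0e-6):      # assumed systematic floors
    sigma_tot = math.hypot(sigma_stat, sigma_sys)
    print(f"sys = {sigma_sys:.1e}: significance = {dalpha / sigma_tot:.1f} sigma")
# 5.0 sigma with no systematics, ~3.5 sigma with an equal systematic term,
# ~1.6 sigma if the systematic floor is three times the statistical error.
[/code]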

cesiumfrog said:
Still, what such uncertainty would explain why the data sets from the two telescopes separately give the same direction for the dipole? Do you think it is an artifact of the Milky Way?

And likewise, it's pointless to speculate about whether it shows a trend as a function of some other variable like direction on the celestial sphere.
 
  • #56
bcrowell said:
My research experience is in spectroscopy, and although it was a different kind of spectroscopy (gamma rays), I think it may help to give me a feel for what's going on here.

FYI, my background is supernova theory which gets me into a lot of different fields.

It is a *very* tricky business to do this kind of thing and determine the centroids of peaks with realistic estimates of one's random and systematic errors.

Also, with astrophysical objects, things happen at the source that cause the centroid to apparently shift. You could have something that suppresses emission at one part of the line more strongly than the other part, and this will cause the line to shift. These are very tiny effects, but what they are measuring are also tiny effects.

These effects also don't have to be at the source. Suppose you have atmospheric or ISM reddening. This suppresses blue frequencies more than red ones, and this will cause your lines to shift.

One other thing that you have to be careful about is unconscious bias. If you have a noisy curve and you try to find the peak, it's actually quite hard, and if you have a human being in the process who knows which results are "better", it's easy to bias things that way. It's not that you are consciously changing the results, but what happens is that you know that you want the curve to move in one way, so you subconsciously push things in that direction.

This is one thing that makes the SN Ia observations different. The evidence for the accelerating universe using SN Ia didn't rely on any high precision measurements. We don't completely understand what causes SN Ia's, and I would be very surprised if they actually released the same energy. However the effect of the universe accelerating was robust enough so that you could be off by say 20% or so, and that still wouldn't affect the conclusions. These uncertainties mean that past a certain point, SN Ia observations become increasingly useless as a standard candle, but the effect is big enough so that it doesn't matter. I remember that we had a discussion about this right after the results came out, and we figured out that even if the team had gotten a lot of things wrong, we were still seeing too large an effect.

It is very common in this business to find that results published by different people in different papers differ by a *lot* more than 1 sigma. The error bars that people publish are whatever the software tells them they are, but roughly speaking, I would triple those error bars because the software can't take into account the errors that result from all the imponderables described above.

And even then you haven't even begun to hit the systematic effects. The problem is that in order to even get from raw data to spectrum, you have to go through about a dozen data analysis steps.

One thing that gives me a good/bad feeling about a paper is whether the authors illustrate that they've done their homework. It may be that interstellar reddening doesn't bias the peaks at all, but it would take me a week to run through the calculations even if I had the data, and I've got my own stuff to do. The fact that I can think of a few systematic biases that the authors haven't addressed makes me quite nervous.
 
  • #57
bcrowell said:
The constant e does not have a physical meaning. It is just a conversion factor between different systems of units. Just as some people work in units where c=1, some people work in units with e=1.

The paper that I'm reading has people arguing back and forth on this issue, and I suggest that we start another thread on this.
 
  • #58
bcrowell said:
The effect they claim to have observed is only statistically significant if you assume that systematic errors are zero, assume that random errors are not underestimated, and average over a large number of data-points.

And interestingly you only get a trend if you set [itex]\Delta\alpha[/itex] to zero at the current time. The data that they have is consistent with a straight line with [itex]\Delta\alpha[/itex] being non-zero at the current time (i.e. there is some systematic bias in their technique that causes all measurements of alpha to be off).
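A quick synthetic illustration of that point (made-up numbers, not their data): if the underlying measurements are really just a constant non-zero offset plus noise, forcing the fit through [itex]\Delta\alpha = 0[/itex] at z = 0 manufactures an apparent trend with redshift.

[code]
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(0.5, 3.0, 40)                        # hypothetical absorber redshifts
dalpha = -0.5e-5 + rng.normal(0.0, 0.3e-5, z.size)   # constant offset + noise, no trend

# Fit 1: straight line forced through the origin (Delta alpha = 0 at z = 0).
slope_forced = np.sum(z * dalpha) / np.sum(z * z)

# Fit 2: straight line with a free intercept.
slope_free, intercept = np.polyfit(z, dalpha, 1)

print(f"forced through origin: slope = {slope_forced:+.2e} per unit z")
print(f"free intercept:        slope = {slope_free:+.2e}, intercept = {intercept:+.2e}")
# The forced fit reports a spurious "evolution" with redshift; the free fit
# correctly attributes almost everything to a constant offset.
[/code]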
 
  • #59
twofish-quant said:
I go for data analysis error. The effects that they are looking for are extremely small, and there is enough uncertainty in quasar emission line production that I don't think that has been ruled out right now.
I don't understand- AFAIK, they are looking at the relative spacing between metallic absorption lines. What are the sources of uncertainty in 'quasar emission line production'?

Edit: Actually, I'm not entirely clear about something- are they measuring the spectral lines from quasars, or are they using quasars as a continuum source and are measuring the absorption lines from intervening galaxies?

twofish-quant said:
The problem that I have is that any statistical error analysis simply will not catch systematic biases that you are not aware of, so while a statistical error analysis will tell you if you've done something wrong, it won't tell you that you've got everything right.

That's mostly true; however, there are experimental techniques that can alleviate systematic bias: relative measurements instead of absolute measurements, for example. The radiometry group at NIST regularly puts out very good papers with excruciatingly detailed error analysis and is a good source for understanding how to carry out precision measurements. Other good discussions can be found in papers that compare standards (the standard kilogram, for example).
 
  • #60
twofish-quant said:
The paper that I'm reading has people arguing back and forth on this issue, and I suggest that we start another thread on this.

Yeah, it's controversial, but it's only controversial because the people on one side of the argument are wrong :-) The paper by Duff that I linked to, http://arxiv.org/abs/hep-th/0208093 , was rejected by Nature, and Duff discusses the referees' comments in an addendum to the paper. The referees were just plain wrong, IMNSHO.
 
  • #61
bcrowell said:
Take a look at figure 3 on p. 5 of this paper http://arxiv.org/abs/astro-ph/0306483 .

Thanks for the link- that figure does raise at least one important question: what are the constraints on the number of velocity components used to fit the data (which, I am assuming, is done with the VPFIT program)? Clearly, increasing the number of velocity components will create a better fit. How did they choose the number of components, which is apparently allowed to vary from graph to graph? And what is the 'column density'?

Otherwise, the paper has quite a bit of detail regarding their data analysis, and answered one question: they are using quasars as sources, and measuring the absorption peaks from dust/stuff in between.
 
  • #62
bcrowell said:
Take a look at figure 3 on p. 5 of this paper http://arxiv.org/abs/astro-ph/0306483 .

I've (tried to) carefully read sections 1, 3, and 5 of this, and I believe their conclusions are sound. Here's why:

Section 1.1.2: they outline the Many Multiplet method. As best I can understand, they use two atomic species: Mg and Fe. The doublet spacings in Mg are not affected by variations in alpha, while the Fe transitions are. Additionally, the Fe transitions at ~2500 A are affected uniformly (as opposed to, say, Ni and the shorter-wavelength Fe transition); see Fig. 1. Thus, they have a system that (1) has a control, (2) has low variability, and (3) possesses the needed precision to measure small changes in alpha.
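For anyone wanting the gist in equations: as far as I understand it (this is a simplified two-line caricature, not the actual simultaneous VPFIT analysis, and the q coefficients below are only indicative orders of magnitude, not the published values), each transition's rest wavenumber scales as [itex]\omega = \omega_0 + q\,[(\alpha_z/\alpha_0)^2 - 1][/itex], so a small change in alpha shows up as an apparent velocity shift [itex]\Delta v/c \approx -2(q/\omega_0)\,\Delta\alpha/\alpha[/itex]. Comparing an alpha-sensitive Fe II line against a nearly insensitive Mg II "anchor" then gives [itex]\Delta\alpha/\alpha[/itex] from the relative shift:

[code]
# Toy two-transition version of the many-multiplet estimate.
# Wavenumbers and q coefficients are indicative order-of-magnitude values,
# NOT the ones actually used in the Webb/Murphy analysis.
C_KMS = 299792.458

def dalpha_over_alpha(dv_rel_kms, omega_anchor, q_anchor, omega_probe, q_probe):
    """Infer Delta(alpha)/alpha from the relative velocity shift (km/s) between
    an alpha-sensitive line and a nearly insensitive anchor line, using
    Delta v / c = -2 (q / omega0) * (Delta alpha / alpha) for each line."""
    sensitivity = -2.0 * (q_probe / omega_probe - q_anchor / omega_anchor)
    return (dv_rel_kms / C_KMS) / sensitivity

# Mg II ~2796 A anchor (omega ~ 35800 cm^-1, small q) versus
# Fe II ~2600 A probe (omega ~ 38500 cm^-1, large q).
est = dalpha_over_alpha(dv_rel_kms=-0.02,
                        omega_anchor=35800.0, q_anchor=120.0,
                        omega_probe=38500.0, q_probe=1400.0)
print(f"Delta alpha / alpha ~ {est:.1e}")
# A relative shift of only ~20 m/s already corresponds to about 1e-6,
# which is why the wavelength calibration has to be so good.
[/code]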

Section 3: they summarize the data analysis method (VPFIT). AFAICT, there are no obvious flaws. But there is some specialized information I am unfamiliar with- their choice of fitting parameters (column density is perhaps optical density?), so perhaps someone else can comment.

Section 5: Here is a detailed description of systematic error. For sure, they understand the optical measurement- use of relative shifts of lines coupled with laboratory calibration lines removes the overwhelming majority of instrument bias. I understand the gas dynamics less well, but section 5.2 appears reasonable (again, AFAICT- maybe someone else can comment)- they seem to have a consistent way to eliminate absorption artifacts. Although I did not understand 5.5, section 5.7 is, I think, a very important demonstration that their method is insensitive to many sources of systematic error.

I think this discussion will be more meaningful once the 'PRL paper' passes (or fails!) the review process.
 
  • #63
Andy Resnick said:
I don't understand- AFAIK, they are looking at the relative spacing between metallic absorption lines. What are the sources of uncertainty in 'quasar emission line production'?

Lots of things. In order to do calculations for where the lines should be you have to include a whole bunch of factors (density, temperature, magnetic polarizations). If you are wrong about any of those factors the lines move.

Edit: Actually, I'm not entirely clear about something- are they measuring the spectral lines from quasars, or are they using quasars as a continuum source and are measuring the absorption lines from intervening galaxies?

They are using quasars as continuum sources and getting absorption spectra from intervening galaxies.

That's mostly true; however, there are experimental techniques that can alleviate systematic bias: relative measurements instead of absolute measurements, for example. The radiometry group at NIST regularly puts out very good papers with excruciatingly detailed error analysis and is a good source for understanding how to carry out precision measurements. Other good discussions can be found in papers that compare standards (the standard kilogram, for example).

I'll take a look at the papers.

The problem with a lot of experimental techniques to eliminate bias is that they are difficult to apply astrophysically. When you are doing a laboratory experiment you can control and change the environment in which you are doing the experiment. In most astrophysical measurements, you don't have any control over the sources that you are measuring, which means that one thing that you have to worry about that you don't have to worry about in laboratory experiments is some unknown factor that is messing up your results. This is a problem because usually there are two or three dozen *known* factors that will bias your data. Also, people are constantly discovering new effects that cause bias. As long as these are "outside the telescope" they can be astrophysically interesting.

Just to give an example of the problem: if you were doing some sort of precision laser experiment, you probably wouldn't do it in a laboratory that was on a roller coaster in the middle of a forest fire putting out smoke and heat. In astrophysics, you have to do that because you don't have any choice. In some situations using relative measurements will make the problem worse, since you increase the chance that the known and unknown bias factors will mess up one of your measurements and not the other.

There are ways that astronomers use to work around the problem, but the authors of the Webb papers haven't been applying any of those, and they don't seem to be aware of the problem.
 
  • #64
twofish-quant said:
There are ways that astronomers use to work around the problem, but the authors of the Webb papers haven't been applying any of those, and they don't seem to be aware of the problem.
Please elucidate! Please describe the error-reducing analytical tools, and please show how they could have improved the science in the Webb papers.

Cosmology is a very loose "science". Observational astronomy is a whole lot more controlled, with accepted standards for data-acquisition and publication. If the data-points of observational astronomy can't be accommodated by cosmology without either tweaking parameters or introducing a new one (or two), perhaps we need to get a bit more open-minded regarding cosmology.

Every single cosmological model that we humans have devised has proven to be wrong. Not only wrong, but REALLY wrong! Is the BB universe model right? I have no money on that horse!
 
  • #65
turbo-1 said:
Cosmology is a very loose "science".
Not is, used to be! In the last 15 years, it's become a high-precision science.

turbo-1 said:
Every single cosmological model that we humans have devised has proven to be wrong. Not only wrong, but REALLY wrong! Is the BB universe model right? I have no money on that horse!
Before Lemaitre's cosmic egg, I wouldn't even dignify any thinking about cosmology with the term "model." Since then, things have just gotten more and more firmed up. It's been 40 years since the Hawking singularity theorem, which IMO pretty much proved that something like the BB happened (at least as far back as the time when the universe was at the Planck temperature).
 
  • #66
Andy Resnick said:
For sure, they understand the optical measurement- use of relative shifts of lines coupled with laboratory calibration lines removes the overwhelming majority of instrument bias. I understand the gas dynamics less well, but section 5.2 appears reasonable (again, AFAICT- maybe someone else can comment)

I don't know anything about the mechanics of gas dynamics. I do know something about quasar gas dynamics, and what they say doesn't make any sense to me. Section 5.2 seems extremely *unreasonable* to me. They just assert that by removing certain spectra that fall into the Lyman-alpha forest they can deal with that, and that weak blends don't make a difference. I have no reason to believe that, and they don't present any reasons to make me change my mind on this. One problem is that when you look at these spectra, it's not obvious what the interloper is.

They do that elsewhere: they assert in italics that "the large-scale properties of the absorbing gas have no influence on estimates of delta alpha", and they've given me no reason to believe this. I don't understand how finding agreement between the redshifts of individual velocity components rules this out.

Figure 6 also looks very suspicious to me. It looks consistent with a line showing no change in alpha but a constant shift that is due to experimental error.

I should point out that a lot of the limits that they have are because of astrophysics. They are doing the best that they can do with the data that they have.

they seem to have a consistent way to eliminate absorption artifacts. Although I did not understand 5.5, section 5.7 is, I think, a very important demonstration that their method is insensitive to many sources of systematic error.

Yes, but there are quite a few systematic error sources that don't get removed.

The thing that makes me doubt the Webb paper is that if he is right, then the half a dozen or so papers that claim no change in the fine-structure constant are wrong. So in trying to figure out what is going on, it's necessary to look not just at Webb's papers, but at the papers that contradict his results. Webb's is the *ONLY* group that I know of that has found a change in the fine structure constant over time.
 
  • #67
turbo-1 said:
Please elucidate! Please describe the error-reducing analytical tools, and please show how they could have improved the science in the Webb papers.

It's really quite simple. You have different groups do different experiments with different techniques, and if you have independent techniques that point to a change in the fine structure constant, then that's the most likely explanation for the results.

What really improves the papers is if you refer to other papers by other groups using different techniques, and then you find the holes and go further. A change in the fine-structure constant ought to cause *LOTS* of things to change, and you look for the changes in the various things.

Cosmology is a very loose "science".

Once you get past one second post-BB, it isn't. For pre-one-second, you can make up anything. Once you get past one second, there's not that much you can do to change the physics.

If the data-points of observational astronomy can't be accommodated by cosmology without either tweaking parameters or introducing a new one (or two), perhaps we need to get a bit more open-minded regarding cosmology.

I'm not sure what your point is. I have absolutely no theoretical reason to be against a varying fine structure constant, either in space or time. The reasons I am skeptical about Webb's results are 1) no other group has reproduced their findings, 2) if the fine structure constant is changing, you ought to see it in multiple independent tests, and 3) some of their findings "smell" like observational error (large-scale dipoles).

Every single cosmological model that we humans have devised has proven to be wrong. Not only wrong, but REALLY wrong! Is the BB universe model right? I have no money on that horse!

Since Webb's results involve z=1 to z=3, I have no idea what any of this has to do with the Big Bang. Whether the fine structure constant is changing or not is pretty much independent of big bang cosmology.
 
  • #68
bcrowell said:
It's been 40 years since the Hawking singularity theorem, which IMO pretty much proved that something like the BB happened (at least as far back as the time when the universe was at the Planck temperature).

In any case, this is for another thread, since it's more or less irrelevant to Webb's findings.
 
  • #69
Also it's worth noting that if Webb-2010 is correct, then Webb-2003 is wrong. What Webb found in 2010 was that in some parts of the sky the fine structure constant appears to be increasing over time, and in other parts the fine structure constant appears to be decreasing.

The type of systematic bias that I'm thinking he may be looking at is something either in the ISM or IGM that causes *all* of the measured alphas to shift by some constant amount depending on what part of the sky you look at. One thing about the graphs that I've seen is that they all end at z=1 and it's assumed that [itex]\Delta\alpha = 0[/itex] at z=0, but there is no reason from the data to think that this is the situation.

What I'd like to see them do is to apply their technique to some nebula within the Local Group. If my hypothesis is right and it turns out there is some experimental issue when you apply the technique to some nearby nebula, then you should see a calculated alpha that is different from the accepted current value.
 
  • #70
It should be pointed out that Webb's group is only one of several groups that are looking at a time variation of alpha, and they've made the news because they are the only group that has reported a non-null result. If anyone other than their group reports a non-null result, that would be interesting, and if they report the *same* non-null result, that would be really interesting.

Maybe it's just me. If I get a result from a telescope saying that the fine structure constant is increasing over time, and then another result from a different telescope saying that the fine structure constant is decreasing over time, then my first reaction would be that I've done something experimentally wrong rather than claiming that the fine structure constant is different in different directions.

Going into 1008.3907v1 I see more and more problems the more I look.

One problem that I see in their Fig. 2 and Fig. 3 is that they don't separate out the Keck observations from the VLT ones. The alternative hypothesis would be that there is some systematic issue with the data analysis, and the supposed dipole just comes from the fact that Keck has more observations in one part of the sky and VLT has more observations in another.
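To see how easily that alternative could mimic a dipole, here is a crude Monte Carlo (completely made-up sky coverage, offsets, and scatter; purely illustrative): give the "Keck" points in one part of the sky a constant calibration offset relative to the "VLT" points in the other, with no real dipole at all, and fit a monopole-plus-dipole model.

[code]
import numpy as np

rng = np.random.default_rng(2)

def random_dirs(n, dec_lo, dec_hi):
    """Unit vectors roughly uniform over a declination band (degrees)."""
    ra = rng.uniform(0.0, 2.0 * np.pi, n)
    dec = np.arcsin(rng.uniform(np.sin(np.radians(dec_lo)),
                                np.sin(np.radians(dec_hi)), n))
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

# Invented sky coverage: "Keck" mostly north, "VLT" mostly south.
n_keck = random_dirs(140, -20.0, 70.0)
n_vlt = random_dirs(150, -70.0, 20.0)
dirs = np.vstack([n_keck, n_vlt])

# No real dipole on the sky: just a small constant calibration offset
# between the two instruments, plus scatter (all numbers invented).
values = np.concatenate([np.full(len(n_keck), -0.5e-5),
                         np.full(len(n_vlt), +0.3e-5)])
values += rng.normal(0.0, 1.5e-5, len(dirs))

# Least-squares fit of d(n) = monopole + A . n
design = np.column_stack([np.ones(len(dirs)), dirs])
coeffs, *_ = np.linalg.lstsq(design, values, rcond=None)
amp = np.linalg.norm(coeffs[1:])
dec_fit = np.degrees(np.arcsin(coeffs[3] / amp))

print(f"fitted 'dipole' amplitude = {amp:.1e}, pointing near dec {dec_fit:+.0f} deg")
# With zero true dipole, the offset between the two samples alone produces
# an apparently coherent dipole roughly aligned with the axis separating
# the two telescopes' sky coverage.
[/code]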

Something else that smells really suspicious is that the pole of the dipole happens to be in an area where the observations aren't. The reason this is odd is that you are much less likely to mistakenly infer a dipole if you have observations at the pole. If you take measurements at the equator of the dipole, what you get are measurements near zero, and any sort of noise that gets you a slope will give you a false dipole reading. If your measurements are near the pole of the dipole, then your signal is going to be a lot stronger, and you'll see a rise and fall near the pole which is not easily reproducible by noise.

So it is quite weird that the universe happens to select the pole of the dipole exactly in a spot where there are no observations from either Keck or VLT, that the equator of the dipole just happens to neatly split their data into two parts, and that the orientation of the dipole happens to be where it would be if it were experimental noise.

Something else that I find **really** interesting is that the equator of their dipole happens to pretty closely match the ecliptic. The belt of their dipole hits the celestial equator at pretty close to 0h and 12h, and the tilt of the data is pretty close to the tilt of the Earth's polar axis. So what the data is saying is that the fine structure constant happens to be varying in a way that just matches the orbit of the Earth. You then have to ask what's weird about the Earth, and one thing that is odd about the planet Earth is that it's where you are taking your measurements from.

What bothers me more than the fact that the equator of the dipole matches the ecliptic is the fact that they didn't notice it. That's a pretty basic thing to miss.

I should point out that every astronomer's nightmare is what happened to a group in the mid-1990s. They had to retract a paper claiming to discover pulsar planets because they didn't take into account the eccentricity of the Earth's orbit. They didn't fare too badly, because it was they themselves who withdrew the paper once they did some more measurements that started to look more and more suspicious as time passed. Still, it's something people want to avoid.
 
