What are the latest exciting results from WMAP's three-year data release?

  • Thread starter SpaceTiger
In summary: Other big news is that the primordial power spectrum is more clearly not consistent with scale-invariance. This means, basically, that we have confirmed another prediction of inflation.
I may be ignorant, but I thought scale invariance was a prediction of inflation?
  • #36
SpaceTiger said:
That analogy doesn't make any sense, as best I can tell. What the WMAP team is basically saying is that the standard model is consistent with data, based on solid observational evidence. You can't rephrase that to say that there is no evidence for the standard model.
The point about the BSE analogy was to show that two equally scientific answers can be valid even though they have the opposite effect. The answer depended on the question asked.
And as I've already said, the standards for a posteriori statistics are usually much higher than three-sigma. If the WMAP team acknowledged those results as indicating the need for a new theory, it would be far more irresponsible than what they did say -- more evidence is required. If you ask me, there's a great deal more bias in your judgement on this issue than theirs. You have a specific theory you're trying to hawk, they do not.
That final statement is a bit loaded; I could respond by saying they are 'hawking' GR, but I will not. These theories, and any others, will stand or fall on experimental verification and falsification; there is no need to 'hawk' them. My point is that here are some interesting observations that should be discussed.

There is a desire for a certainty that statistical evidence, such as from the analysis of the WMAP data, cannot bear.

The legitimate requirement for high-sigma verification of a statement stems from a desire to avoid false positives; however, it has the inevitable consequence of increasing the chance of false negatives.

We just need to be aware of that fact.
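This trade-off is easy to illustrate with a toy simulation (my own sketch, not taken from any WMAP analysis): draw a detection statistic under a null hypothesis and under a true-signal hypothesis, then raise the 'sigma' threshold and watch the false-negative rate climb as the false-positive rate falls.

```python
import random

random.seed(42)
N = 100_000

# Toy model: a unit-variance Gaussian test statistic, centred on 0 when
# there is no effect (null) and on 3 when the effect is real (signal).
null = [random.gauss(0.0, 1.0) for _ in range(N)]
signal = [random.gauss(3.0, 1.0) for _ in range(N)]

for threshold in (2.0, 3.0, 5.0):  # "sigma" cut required to claim a detection
    false_pos = sum(x > threshold for x in null) / N     # fluke detections
    false_neg = sum(x <= threshold for x in signal) / N  # real effects missed
    print(f"{threshold}-sigma cut: false positives {false_pos:.4f}, "
          f"false negatives {false_neg:.4f}")
```

Demanding 5-sigma all but eliminates false positives, but if the real effect only produces a 3-sigma-sized signal, the false-negative rate becomes overwhelming.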

Garth
 
Last edited:
  • #37
Garth said:
The point about the BSE analogy was to show that two equally scientific answers can be valid even though they have the opposite effect. The answer depended on the question asked.

The analogy seems to imply that the question the WMAP team has chosen to answer is somehow deceiving the public, yet the question you have chosen to ask:

"Are the positions of the low-l mode anisotropies consistent with non-random alignment?"

will always be answered in the positive, regardless of the results. This seems to me much more deceptive. I don't disagree with you that it's important what question we ask, but that argument seems to weaken your own position, not that of the WMAP team.


I could respond by saying they are 'hawking' GR, but I will not.

But that is not because they have vested interest in GR, it is because GR has been successfully tested on numerous occasions. It is of little concern to David Spergel (for example) whether or not GR is the correct theory of gravity, he's just testing the theories that are of the most interest to the scientific community. You, however, obviously have a lot to lose or gain from the success of your own theory. I think your "vested interest" argument is also coming back to bite you in the butt.


The legitimate requirement for high-sigma verification of a statement stems from a desire to avoid false positives; however, it has the inevitable consequence of increasing the chance of false negatives.

We just need to be aware of that fact.

Their position is that more evidence is required, which seems to be taking that fact fully into account. Perhaps you should be more clear on your position and why you think it's superior.
 
  • #38
SpaceTiger said:
Their position is that more evidence is required, which seems to be taking that fact fully into account. Perhaps you should be more clear on your position and why you think it's superior.
My point is that, as far as testing for the existence/non-existence of the AoE is concerned, given the context of observations that "If we were eager to claim evidence of strong non-Gaussianity, we could quote the probability of this occurring randomly as less than 2%.", the consequence of requiring more evidence to reduce the chance of a false positive (if it really doesn't exist) is also to increase the chance of a false negative (if it really does exist).

Garth
 
  • #39
Interesting paper on the 'axis of evil'.

http://arxiv.org/PS_cache/astro-ph/pdf/0502/0502237.pdf

Authors: Kate Land, Joao Magueijo
Comments: Small corrections introduced
Report-no: Imperial-TP
Journal-ref: Phys.Rev.Lett. 95 (2005) 071301

We examine previous claims for a preferred axis at $(b,l)\approx (60,-100)$ in the cosmic radiation anisotropy, by generalizing the concept of multipole planarity to any shape preference (a concept we define mathematically). Contrary to earlier claims, we find that the amount of power concentrated in planar modes for $\ell=2,3$ is not inconsistent with isotropy and Gaussianity. The multipoles' alignment, however, is indeed anomalous, and extends up to $\ell=5$ rejecting statistical isotropy with a probability in excess of 99.9%. There is also an uncanny correlation of azimuthal phases between $\ell=3$ and $\ell=5$. We are unable to blame these effects on foreground contamination or large-scale systematic errors. We show how this reappraisal may be crucial in identifying the theoretical model behind the anomaly.
 
  • #40
Garth said:
My point is, as far as testing for the existence/non-existence of the AoE is concerned, given the context of observations that "If we were eager to claim evidence of strong non-Gaussianity, we could quote the probability of this occurring randomly as less than 2%.", the consequence of requiring more evidence to reduce the chance of a false positive also increases the chance of a false negative.

It's not a "negative" result, it's an "inconclusive" result. They're suggesting that more evidence is required to reach a conclusion, not that they will reach the opposite conclusion until that evidence is acquired. This is why citing the axis of evil is such a poor way to approach the problem, because it doesn't, by itself, give useful information.

The approach we take to scientific problems, particularly theoretical ones, is very important. I tend to think of three types:

Worst approach: Scour observational data for something that looks unusual and then make a lot of noise about it. Quote the most dramatic a posteriori probabilities you can compute.

Bad approach: Look for something unusual in the data (or something you find philosophically disturbing) and make a theory such that it can be explained. Pay no heed to the testability of your theory.

Good approach: Learn as much as you can about the observational evidence available, look for statistically significant deviations from standard theory, and try to concoct a testable alternative than can explain at least two separate phenomena.

The first approach is just useless, IMO, and the second approach is extremely unlikely to succeed. If we want to have productive discussions about a scientific problem, I think it's always best to focus on theories that have taken the third approach. Depending on who's discussing it, the "axis of evil" falls into either the first or second category. I don't think it should be forgotten or ignored, but I don't see that there's much to be learned from it at the moment. If we find further deviations from standard theory, particularly on that scale, then it may evolve into a more powerful line of evidence against the standard model of cosmology.
 
  • #41
ST, then we agree on the 'good approach'.

However, I understand it to be the case that in accordance with the first half of that strategy:
Learn as much as you can about the observational evidence available, look for statistically significant deviations from standard theory
those deviations are already statistically significant.

wolram thank you, I was already aware of that 2005 Land & Magueijo paper and their conclusion that
The multipoles' alignment, however, is indeed anomalous, and extends up to [itex]\ell=5[/itex] rejecting statistical isotropy with a probability in excess of 99.9%. There is also an uncanny correlation of azimuthal phases between [itex]\ell=3[/itex] and [itex]\ell=5[/itex].
However, in this discussion I wanted to work with the more recent, weaker and less controvertible conclusions of WMAP3:
the probability of this occurring randomly as less than 2%
Of course these two statements are not inconsistent with each other.

One problem, of course, is that because these low-l modes are relatively few in number, and because they are not point sources like stars whose positions can be determined accurately, "the probability of this occurring randomly as less than 2%" may be all that will ever be statistically inferable. Nevertheless, this is still significant beyond the 95% confidence level.
[EDIT]
As a 'gedankenexperiment', and for the sake of argument assume that this WMAP3 conclusion is all that we will ever be able to say about it.

On the one hand, if it is maintained that "even more compelling evidence is required" for the existence of the AoE to be confirmed, is there not a large chance (>98%) of making a false negative?

Or on the other hand, if it is maintained that the above evidence is sufficient for the existence of the AoE to be confirmed, is there not only a small chance (<2%) of making a false positive?

On the balance of probabilities which is the prudent response? Perhaps the present result is not as "inconclusive" as the Spergel WMAP3 paper makes out?

Garth
 
  • #42
Garth said:
On the one hand assume the AoE does not exist. Will we not then have a small chance (<2%) of making a false positive?

Or, on the other hand assume the AoE does exist. Will we not then have a large chance (>98%) of making a false negative?

This is completely wrong. Didn't we just agree that these a posteriori statistics are not reliable?
 
  • #43
SpaceTiger said:
This is completely wrong. Didn't we just agree that these a posteriori statistics are not reliable?
Sorry, you caught me in the middle of an edit when my computer went down. I have now been able to rephrase the latter part of my argument in the way I want it.

We agreed that a posteriori statistics are less reliable, but it does depend on the actual probabilities and the structure within the alignments. Whether they reject statistical isotropy with a probability in excess of 99.9% or only at the 98% confidence level, these are formidable odds to explain as a statistical 'fluke'.

I am not alone in thinking that there is something there!
On the large-angle anomalies of the microwave sky

Garth
 
  • #44
Garth said:
On the one hand, if it is maintained that "even more compelling evidence is required" for the existence of the AoE to be confirmed, is there not a large chance (>98%) of making a false negative?

Or on the other hand, if it is maintained that the above evidence is sufficient for the existence of the AoE to be confirmed, is there not only a small chance (<2%) of making a false positive?

No, if I'm understanding what you mean by "false positive" and "false negative", that's still incorrect. The statistics refer to the probability of this occurring in a hypothetical random generation of the CMB (with the same power spectrum). They don't, however, give the probability that the feature is real because they don't (and can't) consider the selection bias.
I am not alone in thinking that there is something there!

Certainly not. This has been circulating in what I would call the semi-mainstream. A few theorists have jumped on it in the hopes that it will turn out to be significant, but the overwhelming majority (in my experience) still view it as insufficient evidence for anything useful.
 
  • #45
SpaceTiger said:
No, if I'm understanding what you mean by "false positive" and "false negative", that's still incorrect. The statistics refer to the probability of this occurring in a hypothetical random generation of the CMB (with the same power spectrum). They don't, however, give the probability that the feature is real because they don't (and can't) consider the selection bias.
Ah - the selection bias!

Thank you ST for an informative discussion!

Garth
 
  • #46
SpaceTiger said:
No, if I'm understanding what you mean by "false positive" and "false negative", that's still incorrect. The statistics refer to the probability of this occurring in a hypothetical random generation of the CMB (with the same power spectrum). They don't, however, give the probability that the feature is real because they don't (and can't) consider the selection bias.
ST, for clarity let me expand on my gedankenexperiment and see where we differ.

For a statistical experiment we envisage an ensemble of, say, 200 separate and independent universes, each with a CMB with anisotropic fluctuations similar to ours, and in which one intelligent species has made observations of its CMB similar to WMAP3's.

The null hypothesis to be tested is that the CMB fluctuations are all random, i.e. that they are Gaussian at all modes in the power spectrum.

In 100 of these universes (subset A) the anisotropies are completely random; in the other 100 (subset B) there is a deficiency in the low-l modes and a real AoE caused by some unknown non-cosmological process. The resultant power spectra of all the universes are similar.

In subset A most CMB anisotropies look completely random to the inhabitants of the respective universes; however, in 2 of these universes there is a statistical quirk and the low-l modes appear aligned in an 'AoE'.

In subset B the low-l modes of all the CMB anisotropies appear aligned in an 'AoE'.

In A, 98 species do not observe an alignment and consider their CMB Gaussian, and they are all correct; but 2 do observe an alignment and aren't sure.

Of these 2, if they both maintain that "even more compelling evidence is required" for the existence of the AoE to be confirmed, i.e. the null hypothesis is true, they will be correct. Or on the other hand, if they both maintain that the evidence is sufficient for the existence of the AoE to be confirmed, i.e. the null hypothesis is false, they are mistaken.

In B all 100 aren't sure. If they each maintain that "even more compelling evidence is required" for the existence of the AoE to be confirmed, i.e. the null hypothesis is true, they all will be incorrect. Or on the other hand, if they each maintain that the evidence is sufficient for the existence of the AoE to be confirmed, i.e. the null hypothesis is false, they all are correct.

Now we are in the group of 102 that do observe an apparent low-l mode alignment.

Of those 102:

If they each maintain that "even more compelling evidence is required" for the existence of the AoE to be confirmed, 2 will be correct and 100 will be incorrect.

However, if they each maintain that the evidence is sufficient for the existence of the AoE to be confirmed, then 2 will be incorrect and 100 correct.

My preference is for the strategy that has the greatest chance of giving the correct answer, given that an apparent AoE has been observed in our sky.
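This ensemble count can be checked directly; in Bayesian language it is the posterior probability that the AoE is real given that an alignment is seen. (The 50/50 split of universes and the 2% fluke rate are the thought experiment's stipulated inputs, not measured quantities.)

```python
# Inputs stipulated by the thought experiment, not derived from data:
n_random = 100      # subset A: genuinely Gaussian, isotropic skies
n_real = 100        # subset B: skies with a real, non-cosmological AoE
p_fluke = 0.02      # chance that a random sky shows an apparent alignment

flukes = p_fluke * n_random        # 2 observers in A fooled by chance
true_detections = 1.0 * n_real     # all 100 observers in B see the alignment
group_seeing_alignment = flukes + true_detections   # the "group of 102"

# Posterior probability that the AoE is real, given an alignment is seen
p_real_given_seen = true_detections / group_seeing_alignment
print(f"P(AoE real | alignment seen) = {p_real_given_seen:.3f}")  # 0.980
```

The 100/102 figure reproduces the count above; the contested step is whether a 50/50 prior and a pre-specified 2% fluke rate can be justified a posteriori.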

I will be interested to see where I am mistaken in my thinking.

Garth
 
  • #47
Garth said:
In sub-set A most CMB anisotropies look completely random to the inhabitants of the respective universes, however in 2 of these universes there is a statistical quirk and the low-l modes appear aligned in an 'AoE'.

The problem is here. The statistics aren't saying that two of these universes would appear to be aligned in an 'AoE', they are saying that only two of them will appear to have an axis with these properties (along the ecliptic plane). The real question we're interested in here is not the probability that the multipoles will be aligned to the ecliptic plane, but the probability that the standard model is wrong about the low multipoles.

To attempt to answer this, we might come up with another thought experiment. Let's say, hypothetically, that the standard model is right and we generate 100 random universes, as in your prescription. Now, let's ask the question, what is the probability that, after looking at the low multipoles, someone will notice something in that data that's seemingly inconsistent with the standard model. We could start by just looking at all possible alignments -- the ecliptic plane, the galactic plane, the supergalactic plane, Earth's axis of rotation -- I could go on, but let's stop there for now. Let's say (rather arbitrarily) that there is also a 2% chance of notable alignment with any of these axes. That brings us up to 8 universes.

What about them? In 8 of these universes, someone will have noticed an alignment that they felt brought the standard model into question. But why should we stop at alignments? Perhaps we should also consider anti-alignments -- now we're up to 16 universes. But wait, what about preferred axes in the instrument itself? 20 universes? Perhaps they would have brought it up at less significance -- 30 universes?

So how many universes have apparent discrepancies with the standard model? I don't know, nobody does. That's the problem. There's just no way to compute these probabilities because there's no way to know what astronomers would have noticed in these hypothetical universes. What makes things worse is that the people who found the axis of evil weren't looking for it where it was -- they were looking for signs of alignment with the galactic and supergalactic planes. This makes the argument even more a posteriori.
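The counting argument here is what is often called the look-elsewhere effect, and its inflation of the false-alarm rate is easy to sketch (the 2%-per-axis figure and the candidate counts are the illustrative assumptions from the paragraphs above):

```python
# Chance that at least one of k independent candidate "anomalies" shows up
# by luck, when each would individually be quoted as a 2% coincidence.
p_single = 0.02

candidates = [
    (1, "one pre-specified axis"),
    (4, "ecliptic, galactic, supergalactic planes and Earth's rotation axis"),
    (8, "the above plus anti-alignments"),
    (15, "a generous catalogue of things an astronomer might notice"),
]

for k, label in candidates:
    p_any = 1.0 - (1.0 - p_single) ** k
    print(f"k={k:2d} ({label}): P(some 'anomaly') = {p_any:.3f}")
```

With even a modest catalogue of coincidences one might have noticed, the chance that some '2%' anomaly appears by luck is no longer small, which is why a quoted a posteriori probability cannot be taken at face value.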

How could we get around this problem? Well, at the moment, it's awfully hard. If, based on other compelling evidence, someone had concocted a self-consistent model of the universe that predicted the measurements of the low-l multipoles to give low power, the arguments would be a lot more convincing. If, after seeing the low-l modes of the power spectrum, someone had made up a theory to explain it and immediately checked for the axis of evil where it was, that would also be more convincing.

Given that neither of these things happened, however, we're in a tougher position. I agree with what the WMAP folks said -- more compelling evidence is required.
 
  • #48
If we are looking for arbitrary alignments, such as, say, the three stars of Orion's belt, the clue indicating that they are random is the fact that there are about 2,000 naked-eye stars that are not aligned. Even restricting ourselves to stars as bright as the belt, there are many hundreds of non-aligned stars. With the quadrupole and octupole alignments, all the multipole vectors are part of the alignment.

The question of the direction of the alignment perceived a posteriori becomes significant if a reasonable cause could be identified that would produce such an alignment. Land & Magueijo:
The axis of evil
It has been suggested that a preferred direction in CMB fluctuations may signal a non-trivial cosmic topology (e.g. [1, 12, 13, 14]), a matter currently far from settled. The preferred axis could also be the result of anisotropic expansion, possibly due to strings, walls or magnetic fields [15], or even the result of an intrinsically inhomogeneous Universe [16]. Such claims remain controversial; more mundanely the observed “axis of evil” could be the result of galactic foreground contamination or large scale unsubtracted systematics (see [17, 18, 19, 20] for past examples).
Also they report structure in the alignments:
There is also an uncanny correlation of azimuthal phases between ℓ = 3 and ℓ = 5.

Also, Chris Vale's Local Pancake Defeats Axis of Evil provides an enticing possibility: a lensing of the CMB dipole by the Solar System moving relative to a local mass.


Garth
 
  • #49
Garth said:
The question of the direction of the alignment perceived a posteriori becomes significant if a reasonable cause could be identified that would produce such an alignment.

But that's just the point of my selections (galactic plane, supergalactic plane, etc.), they are all planes of symmetry along which we might expect contamination. If I truly wanted to be arbitrary, I had a limitless number of planes from which to choose.

The fact that there exist multiple plausible reasons for the alignment should be another clue. If there were one glaring possibility that stood out above the rest, that would lend weight to the significance of the "axis", but all these possible causes indicate a large theoretical degeneracy and a large space of potential alignments that would be deemed significant.
 
  • #50
Sorry to introduce a new element into this thread, but the 3-year WMAP results are just so rich.

I have several questions, to anyone interested in answering:
  • to what extent can the Planck mission be tweaked, to take account of the WMAP results?
  • ditto, other CMB projects?
  • What are the CMB projects, already under-way or in an advanced stage of planning, that will reduce the error bars in the high-l modes?
  • To what extent can/have the WMAP results significantly advanced our understanding of (local) galactic foregrounds - dust, gas, free-free transitions, ...? extra-galactic foregrounds - Local Group dust (for example), dust (etc) in the LMC, SMC, M31, ...?
  • What do these results have to say about the ISW?
  • There are some ~300 point sources in these data, up from ~200 in the year-1 data. How consistent are these (observed) (extragalactic) point sources with the (observed - SDSS/2dF etc) P(k)?
 
  • #51
Nereid said:
Sorry to introduce a new element into this thread, but the 3-year WMAP results are just so rich.

There are some ~300 point sources in these data, up from ~200 in the year-1 data. How consistent are these (observed) (extragalactic) point sources with the (observed - SDSS/2dF etc) P(k)?

Hi Nereid! Yes, we have been rather hogging the discussion!

I think that the large-l modes are interesting. Whereas the WMAP2 power spectrum indicated the rise towards the third peak but did not continue far enough to mark that peak, WMAP3 does continue to l > 800 yet does not show the peak at all; its error bars are too large and even then do not cross the predicted curve. What is there seems to 'plateau out'. WMAP has a noise problem at the high-l end. (Hinshaw et al. http://lambda.gsfc.nasa.gov/product/map/dr2/pub_papers/threeyear/temperature/wmap_3yr_temp.pdf page 75.)

That third peak, important to determine [itex]\Omega_b[/itex], has to be determined by other experiments: Acbar, Boomerang, CBI, VSA.

Garth
 
  • #52
The Planck mission was almost certainly designed to look for things we expect from the standard model and, since the standard model hasn't been called into question by WMAP, I wouldn't expect a shift in the Planck design. They're primarily planning to look at angular scales of l < 2000, if I remember correctly, and the interesting range will be between 1000 and 2000, where WMAP hasn't covered. It may seem like a small range, but it's about a million modes on the sky, so there's much to be learned.

If the standard model holds, there aren't a lot of new results to be garnered from the primary anisotropies -- smaller error bars and a possible detection of B mode polarization from gravitational waves. Some of the most interesting results ought to come out of the secondary anisotropies, which include the Sunyaev-Zeldovich effect and extragalactic sources.

There are a lot of WMAP results concerning the galaxy and dust, but I haven't had time to review them. Dr. Spergel gave a talk about it this past Wednesday and it seemed that one of the main results was the confirmation of emission from spinning dust grains.
 
  • #53
I think the third peak in the power spectrum has been nailed down in the WMAP release, and is well explained by the LCDM model. There will be several more papers on this in the year to come . . . IMO.
 
  • #54
Bernui, Mota, Reboucas, & Tavakol have today updated their paper
Mapping the large-scale anisotropy in the WMAP data to include WMAP 3 data.

They had previously described another method of measuring large-scale anisotropies:
Introduction

Here we propose a new indicator, based on the angular pair separation histogram (PASH) method [20], as a measure of large-scale anisotropy. An important feature of this indicator is that it can be used to generate a sky map of large-scale anisotropies in a given CMB temperature fluctuations map. This level of directional detail may also provide a possible additional window into their causes.
With the result:
Conclusions

We have proposed a new method of directionally measuring deviations from statistical isotropy in the CMB sky, in order to study the possible presence and nature of large-scale anisotropy in the WMAP data.
The use of our anisotropy indicator has enabled us to construct a map of statistical deviations from isotropy for the CMB data. Using this σ–map we have been able to find evidence for a large-scale anisotropy in the WMAP CMB temperature field. In particular we have found, with high statistical significance (> 95% CL), a small region in the celestial sphere with very high values of σ, which defines a direction very close to the one reported recently [6, 10].
This result persists even after attempts to explain it away as an artefact of the data-processing or foreground-cleaning procedures:
We have shown that the results reported here are robust, by showing that the σ–map does not significantly change by changing various parameters employed in its calculation. We have also studied the effects of different foreground cleaning algorithms, or absence thereof, by considering in addition to LILC also the TOH and WMAP CO-ADDED maps. We have found again that the corresponding σ–maps remain qualitatively unchanged. In particular the hot spot on the south-eastern corner of the σ–map remains essentially invariant for all the maps considered here. This robustness demonstrates that our indicator is well suited to the study of anisotropies in the CMB data.
Now that result has been preliminarily updated with the WMAP3 data:
Finally, we add that after our paper was submitted, the three year WMAP CMB data was released [28]. As a preliminary check, we have calculated the σ−map for the new three year WMAP CO-ADDED map, which is depicted in Fig. 6. As can be seen the hot spot found in the first year σ−map in the south eastern corner of the sky, remains qualitatively unchanged with an axis also in agreement with that found for the first year data. In this way our results are also robust with respect for the three year WMAP CMB data. A complete and detailed analysis of the three year WMAP data using our indicator will be presented elsewhere.
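For readers unfamiliar with the PASH idea, here is a toy reading of it (my own simplified sketch, not the Bernui et al. code; their σ-map repeats this kind of comparison over spherical caps to obtain directional information): histogram the angular separations of all point pairs and compare with the isotropic expectation.

```python
import math
import random

random.seed(1)

def random_direction():
    """Uniform random unit vector on the sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def pash(points, n_bins=18):
    """Angular pair-separation histogram, normalised to unit total."""
    counts = [0] * n_bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dot = sum(a * b for a, b in zip(points[i], points[j]))
            ang = math.acos(max(-1.0, min(1.0, dot)))  # separation in [0, pi]
            counts[int(ang / math.pi * n_bins) % n_bins] += 1
    total = sum(counts)
    return [c / total for c in counts]

# An isotropic sky: pair separations follow the sin(theta)/2 distribution.
points = [random_direction() for _ in range(300)]
hist = pash(points)

# Expected isotropic fraction per bin: the integral of sin(t)/2 over the bin.
edges = [k * math.pi / 18 for k in range(19)]
expected = [(math.cos(a) - math.cos(b)) / 2.0 for a, b in zip(edges, edges[1:])]

# A crude "sigma" indicator: mean squared deviation from isotropy.
sigma = sum((h - e) ** 2 for h, e in zip(hist, expected)) / 18
print(f"sigma indicator for an isotropic sample: {sigma:.2e}")
```

For a genuinely isotropic sample the indicator stays small; an intrinsically anisotropic sky, or one region of it, would push the histogram away from the sin(theta)/2 curve and the indicator up.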

There are three questions to ask:
1. "Is the distribution of anisotropies in the WMAP data non-Gaussian?",
2. "Is there an alignment in the non-Gaussianity?" and
3. "Is any such alignment identifiable with local geometry, such as motion through the CMB, the galactic plane, etc.?"

The interpretation of the statistical significance of the result depends on the question asked.

Garth
 
  • #55
Garth said:
The interpretation of the statistical significance of the result depends on the question asked.

Garth

If you look at the animation of figure 3 at:
http://www.physics.nmt.edu/~dynamo/PJRX/Results.html
and try to see it as a possible crude representation of the current cosmological situation, while seeing our streaming galactic cluster somewhat left of center, I think you could see something akin to the observed dipole/octopole structure.

Could the all-matter, observed universe be seen as a pulse/jet and not as an isometric/homogeneous expansion?
aguy2
 
  • #56
aguy2 said:
If you look at the animation of figure 3 at:
http://www.physics.nmt.edu/~dynamo/PJRX/Results.html
and try to see it as a possible crude representation of the current cosmological situation, while seeing our streaming galactic cluster somewhat left of center, I think you could see something akin to the observed dipole/octopole structure.

Could the all-matter, observed universe be seen as a pulse/jet and not as an isometric/homogeneous expansion?
aguy2
The short answer is no! I find it hard to envisage what geometry your suggested setup is meant to have.

The anisotropies we are talking about are at the [itex]10^{-5}[/itex] level; apart from the dipole, caused by our motion with respect to the surface of last scattering, the CMB is remarkably isotropic.

That animation had a vague resemblance to the quadrupole/octopole distribution across the sky but that is all.

Garth
 
  • #57
I see problems with Bernui et al.
 
  • #58
Chronos said:
I see problems with Bernui et al.
Such as?

Garth
 
  • #59
A new paper by Copi, Huterer, Schwarz and Starkman on the subject of low-l mode correlations: The Uncorrelated Universe: Statistical Anisotropy and the Vanishing Angular Correlation Function in WMAP Years 1-3
We have shown that the ILC123 map, a full sky map derived from the first three years of WMAP data like its predecessors the ILC1, TOH1 and LILC1 maps, derived from the first year of WMAP data, shows statistically significant deviations from the expected Gaussian-random, statistically isotropic sky with a generic inflationary spectrum of perturbations. In particular: there is a dramatic lack of angular correlations at angles greater than sixty degrees; the octopole is quite planar with the three octopole planes aligning with the quadrupole plane; these planes are perpendicular to the ecliptic plane (albeit at reduced significance than in the first-year full-sky maps); the ecliptic plane neatly separates two extrema of the combined ℓ = 2 and ℓ = 3 map, with the strongest extrema to the south of the ecliptic and the weaker extrema to the north.
We have discussed before whether these alignments are just a statistical 'fluke' or whether "more compelling evidence" is required before it is acknowledged that all is not well with the standard [itex]\Lambda CDM[/itex] model interpretation of the WMAP data.

The authors go on:
The probability that each of these would happen by chance are 0.03% (quoting the cut-sky ILC123 S1/2 probability), 0.4%, 10%, and < 5%. As they are all independent and all involve primarily the quadrupole and octopole, they represent a ~10−8 probability chance “fluke” in the two largest scale modes. To quote [7]: We find it hard to believe that these correlations are just statistical fluctuations around standard inflationary cosmology’s prediction of statistically isotropic Gaussian random aℓm [with a nearly scale-free primordial spectrum].
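The "~10−8" quoted above is just the product of the four probabilities, treated as independent (a quick arithmetic check; the "< 5%" entry is taken at face value as 5%):

```python
# The four independent chance probabilities quoted by Copi et al.
probs = {
    "S_1/2 lack of large-angle correlation": 3e-4,        # 0.03%
    "octopole planarity and quadrupole alignment": 4e-3,  # 0.4%
    "perpendicularity to the ecliptic plane": 0.10,       # 10%
    "ecliptic separating the two extrema": 0.05,          # < 5%
}

combined = 1.0
for p in probs.values():
    combined *= p
print(f"combined chance-'fluke' probability: {combined:.1e}")  # 6.0e-09
```

This indeed rounds to the ~10−8 the authors quote; the caveat aired earlier in the thread is that multiplying a posteriori probabilities presumes they were genuinely independent and specified in advance.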

What explanations may there be?
What are the consequences and possible explanations of these correlations? There are several options — they are statistical flukes, they are cosmological in origin, they are due to improper subtraction of known foregrounds, they are due to a previously unexpected foreground, or they are due to WMAP systematics.
How do the authors assess these explanations?
As remarked above it is difficult for us to accept the occurrence of a 10−8 unlikely event as a scientific explanation.
1.
A cosmological mechanism could possibly explain the weakness of large angle correlations, and the alignment of the quadrupole and octopole to each other. A cosmological explanation must ignore the observed correlations to the solar system, as there is no chance that the universe knows about the orientation of the solar system nor vice-versa. These latter correlations are unlikely at the level of less than 1 in 200 (plus an additional independent ≈ 1/10 unlikely correlation with the dipole which we have ignored). This possibility seems to us contrived and suggests to us that explanations which do not account for the connection to solar system geometry should be viewed with considerable skepticism.
In [16], we showed that the known Galactic foregrounds possess a multipole vector structure wholly dissimilar to that of the observed quadrupole and octopole. This argues strongly against any explanation of the observed quadrupole and octopole in terms of these known Galactic foregrounds.
2.
A number of authors have attempted to explain the observed quadrupole-octopole correlations in terms of a new foreground [51–56]. (Some of these also attempted to explain the absence of large angle correlations, for which there are also other proffered explanations [57–61].) Only one of the proposals ([53]) can possibly explain the ecliptic correlations, as all the others are extragalactic. Some do claim to explain the less-significant dipole correlations. Difficulties with individual mechanisms have been discussed by several authors [56, 62–65] (sometimes before the corresponding proposal). Unfortunately, in each and every case, among possible other deficiencies, the pattern of fluctuations proposed is inconsistent with the one observed on the sky. As remarked above, the quadrupole of the sky is nearly pure [itex]Y_{22}[/itex] in the frame where the z-axis is parallel to [itex]\hat{w}^{(2,1,2)}[/itex] (or any nearly equivalent direction), while the octopole is dominantly [itex]Y_{33}[/itex] in the same frame. Mechanisms which produce an alteration of the microwave signal from a relatively small patch of sky—and all of the above proposals fall into this class—are most likely to produce aligned [itex]Y_{20}[/itex] and [itex]Y_{30}[/itex]. (This is because if there is only one preferred direction, then the multipole vectors of the affected multipoles will all be parallel to each other, leading to a [itex]Y_{\ell 0}[/itex].) The authors of [55] manage to ameliorate the situation slightly by constructing a distorted patch, leading to an underpowered [itex]Y_{33}[/itex], but still a pure [itex]Y_{20}[/itex]. The second shortcoming of all explanations where a contaminating effect is effectively added on top of the intrinsic CMB temperature is that chance cancellation is typically required to produce the low power at large scales, or else the intrinsic CMB happens to have even less power than what we observe. Likelihood therefore disfavors all additive explanations [56] (unless the explanation helps significantly with some aspect of structure seen at higher [itex]\ell[/itex]).
So:
Explaining the observed correlations in terms of foregrounds is difficult. The combined quadrupole and octopole map suggests a foreground source which forms a plane perpendicular to the ecliptic. It is clear neither how to form such a plane, nor how it could have escaped detection by other means. This planar configuration means that single anomalous hot or cold spots do not provide an adequate explanation for the observed effects.
3.
The final possibility is that systematic effects remain in the analysis of the WMAP data.
The consequences if indeed these correlations are real?
If indeed the observed [itex]\ell[/itex] = 2 and 3 CMB fluctuations are not cosmological, there are important consequences. Certainly, one must reconsider [7] all CMB results that rely on low [itex]\ell[/itex]s, including the optical depth, [itex]\tau[/itex], to the last scattering surface; the inferred redshift of reionization; the normalization, A, of the primordial fluctuations; [itex]\sigma_8[/itex], the rms mass fluctuation amplitude in spheres of size [itex]8h^{-1}[/itex] Mpc; and possibly the running [itex]dn_s/d\ln k[/itex] of the spectral index of scalar perturbations (which, as noted in [68], depended in WMAP1 on the absence of low-[itex]\ell[/itex] TT power).
Of even more fundamental long-term importance to cosmology, a non-cosmological origin for the currently observed low-[itex]\ell[/itex] microwave background fluctuations is likely to imply further-reduced correlation at large angles in the true CMB. As shown in Section 3, angular correlations are already suppressed compared to [itex]\Lambda[/itex]CDM at scales greater than 60 degrees at between 99.85% and 99.97% C.L. (with the latter value being the one appropriate to the cut-sky ILC123). This result is more significant in the year-123 data than in the year-1 data. The less correlation there is at large angles, the poorer the agreement of the observations with the predictions of generic inflation. This implies, with increasing confidence, that either we must adopt an even more contrived model of inflation, or seek other explanations for at least some of our cosmological conundrums. Moreover, any analysis of the likelihood of the observed “low-[itex]\ell[/itex] anomaly” that relies only on the (low) value of [itex]C_2[/itex] (especially the MLE-inferred one) should be questioned. According to inflation, [itex]C_2[/itex], [itex]C_3[/itex] and [itex]C_4[/itex] should be independent variables, but the vanishing of [itex]C(\theta)[/itex] at large angles suggests that the different low-[itex]\ell[/itex] [itex]C_\ell[/itex] are not independent.
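For readers unfamiliar with the statistic behind the 0.03% figure quoted earlier: [itex]S_{1/2}[/itex] is the integral of [itex]C(\theta)^2[/itex] over [itex]\cos\theta[/itex] from -1 to 1/2, where [itex]C(\theta)[/itex] is built from the [itex]C_\ell[/itex] via Legendre polynomials. A minimal sketch of both quantities, using made-up illustrative [itex]C_\ell[/itex] values, not the measured WMAP ones:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def angular_correlation(cl, x):
    """C(theta) = sum_l (2l+1)/(4 pi) C_l P_l(x), where x = cos(theta)
    and cl[l] holds the power spectrum coefficient for multipole l."""
    ell = np.arange(len(cl))
    coeffs = (2 * ell + 1) / (4 * np.pi) * cl
    return legval(x, coeffs)

def s_half(cl, n=2001):
    """S_{1/2}: integral of C(theta)^2 over cos(theta) from -1 to 1/2,
    evaluated with a simple trapezoid rule."""
    x = np.linspace(-1.0, 0.5, n)
    y = angular_correlation(cl, x) ** 2
    dx = x[1] - x[0]
    return float(np.sum((y[:-1] + y[1:]) * 0.5) * dx)

# Illustrative spectrum (NOT the measured WMAP C_l): monopole and
# dipole removed, C_l falling as 1/(l(l+1)) above that.
cl = np.zeros(31)
ell = np.arange(2, 31)
cl[2:] = 1.0 / (ell * (ell + 1))

print(s_half(cl))
```

A small [itex]S_{1/2}[/itex] is exactly the lack of large-angle ([itex]\theta > 60^\circ[/itex]) correlation the quoted passage describes.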
And what of the standard model's interpretation of the data?
This does not seem reasonable to us — that one starts with data that has very low correlations at large angles, synthesizes that data, corrects for systematics and foregrounds and then concludes that the underlying cosmological data is much more correlated than the observations — in other words that there is a conspiracy of systematics and foreground to cancel the true cosmological correlations.
This strongly suggests to us that there remain serious issues relating to the failure of statistical isotropy that are permeating the map making, as well as the extraction of low-ℓ Cℓ.
At the moment it is difficult to construct a single coherent narrative of the low ℓ microwave background observations. What is clear is that, despite the work that remains to be done understanding the origin of the observed statistically anisotropic microwave fluctuations, there are problems looming at large angles for standard inflationary cosmology.
So it is the standard model that is a conspiracy theory!:wink:

As I have said several times, we must not forget that the interpretation of the precise WMAP data is model-dependent, and that model is looking more problematic as time goes on...

Garth
 
Last edited:
  • #60
A paper today, Anomalies in the low CMB multipoles and extended foregrounds, explains the low-[itex]\ell[/itex] mode anomalies by an extended foreground centred on the Local Supercluster (LSC).
We discuss how an extended foreground of the cosmic microwave background (CMB) can account for the anomalies in the low multipoles of the CMB anisotropies. The distortion needed to account for the anomalies is consistent with a cold spot with the spatial geometry of the Local Supercluster (LSC) and a temperature quadrupole of order [itex]\Delta T_2^2 \sim 50\ \mu K^2[/itex]. If this hypothetic foreground is subtracted from the CMB data, the amplitude of the quadrupole (l=2) is substantially increased, and the statistically improbable alignment of the quadrupole with the octopole (l=3) is substantially weakened, increasing dramatically the likelihood of the "cleaned" maps.
A solution has been found?
CONCLUSIONS
We have presented circumstantial evidence that an extended foreground near the dipole axis could be distorting the CMB. The subtraction of such a foreground increases the quadrupole, removes the (anomalous) quadrupole-octopole alignment, and dramatically increases the overall likelihood of the CMB maps. Possible physical mechanisms that could account for this foreground are the Sunyaev-Zeldovich effect [25] and the Rees-Sciama effect [27], although it should be noted that both options only work in extreme situations that are probably unrealistic. Another possibility is that a combination of effects is responsible for the foreground. However, if the Sunyaev-Zeldovich effect due to the LSC’s gas is indeed responsible for the foreground, it could be directly observed by the Planck satellite [53] within the next few years.
(emphasis mine)
So we should know in a few more years...

Or is this an example of the Copi, Huterer, Schwarz and Starkman suggestion (my post #59) that:
there is a conspiracy of systematics and foreground to cancel the true cosmological correlations.
Garth
 
Last edited:
  • #61
What is it about the WMAP results that tells us the universe is flat?

I've been browsing back in this thread and the "What is CMB" thread to answer this question, and gather (mainly from Space Tiger and Garth) that a flat geometry for the universe is deduced from a best fit of theory (treating many factors?) to the high frequency peaks (of an assumed power-law spectrum of evolved primordial density fluctuations?).

Is this anywhere near correct?

I'd like to understand just what is it about the flat geometry that produces the good fit to the data. Perhaps the sensitivity of the Sachs-Wolfe effect to geometry, or something more subtle or quite different?

I realize from the threads that analysing the spectrum is a highly technical matter. But I'd love a simple explanation.
 
Last edited:
  • #62
As far as I know, the main datum used to infer the geometry is the angular size of the first peak. The first peak is a primary anisotropy, the greatest of acoustic nature. Its physical size is determined by the size of the particle horizon at decoupling. The relation between the observed angular size of the first peak and its physical size depends on the geometry of space and on the distance to the last scattering surface. Thus, to infer the geometry from the observed angular scale of the first peak, one needs an assumption about the size of the particle horizon at decoupling as well as about the distance to the last scattering surface. For example, a positive curvature implies that the physical scale of the first peak should be smaller than in a flat space.
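hellfire's argument can be made concrete with the small-angle relation [itex]\theta \approx l / d_A[/itex]: for a fixed physical size of the acoustic feature, a geometry that yields a larger angular diameter distance to last scattering predicts a smaller angle, and hence a first peak at a higher multipole [itex]\ell \approx \pi/\theta[/itex]. A sketch with placeholder numbers (the sizes and distances below are assumptions for illustration, not fitted values):

```python
import math

def peak_angle_deg(l_mly, d_a_mly):
    """Small-angle relation theta = l / d_A, returned in degrees."""
    return math.degrees(l_mly / d_a_mly)

# Placeholder inputs: one physical size, two hypothetical distances (Mly).
l_peak = 0.7   # assumed physical size of the acoustic feature at decoupling
d_near = 41.3  # assumed angular diameter distance, model A
d_far = 74.3   # assumed angular diameter distance, model B

theta_a = peak_angle_deg(l_peak, d_near)
theta_b = peak_angle_deg(l_peak, d_far)
print(theta_a, theta_b)                  # model B sees the same feature smaller
print(180.0 / theta_a, 180.0 / theta_b)  # rough multipole, l ~ pi / theta
```

The larger distance pushes the same physical feature to a smaller angle and a higher [itex]\ell[/itex], which is why the measured position of the first peak constrains the geometry.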
 
Last edited:
  • #63
hellfire, oldman,
maybe we could do a calculator experiment about this!

we know, because of the temperature, that the CMB is at z = 1100.

we could convert that to distance using two assumptions, with Ned Wright's calculator: we could assume flat and we could assume some nonflat case.

then we would get two different figures for the area of the last-scattering sphere---two different ideas of the actual physical size of the universe at the time of decoupling.

this seems doable (using Wright's calculator) with some simple arithmetic

from the temperature we could deduce the speed of sound in that medium, and we would have two separate cases of what the size of the medium is---maybe we could get some intuition about what hellfire says about the size of the first acoustic peak.

then, we would expect that the angular size we calculate in the FLAT case would match the observed CMB power spectrum---i.e. fit the mottled way it looks. and the angular size we calculate from the NONFLAT case would NOT match the observed CMB picture. so we would be doing a crude imitation of the professional CMB analyst routine.

hands dirty CMB interpretation you can do in your own kitchen. I would like to see it, if anyone's game.
 
Last edited:
  • #64
The impact of the curvature on the relation between the actual size and the measured angular size of the first peak enters through the definition of the angular diameter distance [itex]d_A[/itex] (for which you have started a thread recently).

In general, the actual size [itex]l[/itex] relates to the angular size [itex]\theta[/itex] as:

[tex]l = d_A \theta[/tex]

In the cosmological calculator you get, for example for z = 1100:

[tex]d^{SCDM}_A[/tex] = 24.08 Mly
[tex]d^{OCDM}_A[/tex] = 74.26 Mly

For:
- The flat model SCDM (standard cold dark matter model): [itex]\Omega = 1[/itex], with [itex]\Omega_m = 1[/itex]
- The open model OCDM (open cold dark matter model): [itex]\Omega = 0.3[/itex], with [itex]\Omega_m = 0.3[/itex]

If you assume a measured angular size of about 1° for both models, you see immediately that the actual size of the first peak would be smaller in the flat model than in the open model.

This may help to illustrate the angular diameter distance in different models.

But the problem is that both models produce a different angular size for the first peak (it cannot be assumed that both are 1° today). To calculate the actual size of the first peak one should proceed as you propose, i.e. the size of the sound horizon should be calculated. How to do this I don't know.

Afterwards, applying the formula [itex]\theta = l / d_A[/itex], you could calculate the expected angular size on the sky of the first peak for both models.
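The first step (assume [itex]\theta[/itex] = 1° and compute [itex]l = d_A \theta[/itex] for each model) is a two-line calculation using the distances quoted above:

```python
import math

THETA = math.radians(1.0)  # assumed angular size of the first peak

# Angular diameter distances to z = 1100 quoted above, in Mly:
distances = {"SCDM (flat)": 24.08, "OCDM (open)": 74.26}

for model, d_a in distances.items():
    l_phys = d_a * THETA  # physical size: l = d_A * theta
    print(f"{model}: l = {l_phys:.2f} Mly")
```

This gives roughly 0.42 Mly for the flat model against roughly 1.30 Mly for the open model, i.e. the same 1° feature corresponds to a smaller physical size in the flat case, as stated above.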
 
Last edited:
  • #65
hellfire said:
The impact of the curvature in the relation between actual size and measured angular size of the first peak ...

Thanks, Marcus and Hellfire, for your answers and calcs. It seems that finding out from the WMAP results that the universe's geometry is flat is much easier than I thought. Viva Euclid!
 
  • #66
I have read that it is usually assumed that the speed of sound during recombination is equal to [itex]c_s = c/ \sqrt{3}[/itex] (would be nice if someone could check this). The sound horizon is:

[tex]s = \int_0^{t_{rec}} dt \frac{c_s}{a}[/tex]

this would mean that it is [itex]1 / \sqrt{3}[/itex] times the size of the particle horizon at recombination. In my calculator I have an output of the particle horizon for a specific redshift, but I see now that I have made a silly mistake there and that this output field is incorrect. I will see if I can correct this today. Then we would have the values [itex]s^{OCDM}[/itex], [itex]s^{SCDM}[/itex].
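For intuition, the integral can be evaluated numerically in the simplest toy case: a flat, matter-dominated (Einstein-de Sitter) universe with a constant sound speed [itex]c_s = c/\sqrt{3}[/itex]. In that model the comoving particle horizon has the closed form [itex]3ct/a(t)[/itex], so the sound horizon should come out to [itex]1/\sqrt{3}[/itex] of it, exactly as stated above. A sketch, in units with c = 1 and [itex]a(t_0) = 1[/itex] (a toy check, not the full calculation for a realistic expansion history):

```python
import math

C = 1.0                  # work in units where c = 1
CS = C / math.sqrt(3.0)  # assumed constant sound speed c_s = c / sqrt(3)

def a_eds(t, t0=1.0):
    """Scale factor of a flat matter-dominated universe, a(t0) = 1."""
    return (t / t0) ** (2.0 / 3.0)

def sound_horizon(t_rec, n=200_000):
    """s = integral from 0 to t_rec of c_s / a(t) dt (comoving sound horizon).

    Midpoint rule on [t_lo, t_rec], plus the analytic value of the small
    [0, t_lo] piece (the integrand has an integrable singularity at t = 0).
    """
    t_lo = t_rec / n
    dt = (t_rec - t_lo) / n
    total = 0.0
    for i in range(n):
        t = t_lo + (i + 0.5) * dt
        total += CS / a_eds(t) * dt
    total += CS * 3.0 * t_lo / a_eds(t_lo)  # exact EdS result for [0, t_lo]
    return total

t_rec = 1e-5  # illustrative recombination time, in units of t0
s_num = sound_horizon(t_rec)
s_analytic = CS * 3.0 * t_rec / a_eds(t_rec)  # (1/sqrt 3) * particle horizon
print(s_num, s_analytic)
```

The midpoint sum agrees with the closed-form value to well under a percent, which is a useful check before trusting the same routine on a more realistic [itex]a(t)[/itex].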

Next step would be to know how to go from the value of [itex]s[/itex] to the size of the first peak...?
 
Last edited:
  • #67
For the [itex]\Lambda[/itex]CDM model we might try to do the calculation backwards. First, the angular diameter distance for z = 1100 is:

[tex]d^{\Lambda CDM}_A[/tex] = 41.34 Mly

Now, if I assume that the size of the first peak is equal to the sound horizon size [itex]l = s[/itex] (?), then, applying [itex]l = d_A \theta[/itex], with [itex]\theta[/itex] = 1°, I would get:

[tex]s^{\Lambda CDM}[/tex] = 0.7 Mly

This value seems meaningful to me. But the point is that this value for [itex]s[/itex] should arise from the horizon formula I put above; then, inserting it into [itex]l = d_A \theta[/itex], it should lead to the angular size of the first peak. I have been trying to modify my calculator to give such an output but I did not succeed; the value for s I am getting with the calculator is about 833 Mly.
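The backwards step itself is one line: with [itex]\theta[/itex] = 1° converted to radians, [itex]l = d_A \theta[/itex] reproduces the figure above:

```python
import math

d_a = 41.34                # LambdaCDM angular diameter distance quoted above, Mly
theta = math.radians(1.0)  # assumed angular size of the first peak
s = d_a * theta            # inferred physical size of the sound horizon
print(f"s = {s:.2f} Mly")
```

This gives about 0.72 Mly, consistent with the 0.7 Mly quoted above.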

Could anyone find out the value of the particle horizon and sound horizon at z = 1100 for the [itex]\Lambda[/itex]CDM model?
 
Last edited:
  • #68
hellfire, I want to echo the appreciation expressed by oldman.
Your demonstration of the down-and-dirty-nitty-gritty of the first acoustic peak
seems to satisfy oldman (at least for now) and although I have not
followed all your steps I feel generally better about it too.
sometime I hope we locate an online tutorial article about this,
but for now we seem to have gotten a better handle on it.
 
  • #69
marcus said:
Your demonstration ... of the first acoustic
seems to satisfy oldman (at least for now) and ...I feel generally better about it too.

Marcus, in this thread Hellfire and yourself seem to agree that the main flat-geometry-indicator, if I may call it that, is the angular width of the first acoustic peak. Fine -- I think I grasp the explanations you both so kindly gave.

Yet in your most recent post in the thread "WMAP 3 and spatial closure" (#108) you quoted a statement: "...However, altering the geometry of universe mainly affects the positions of the CMB acoustic peaks,..." This seems to imply that it is the position of the peaks rather than a width that is the main flat-geometry-indicator. This confuses me again.


The WMAP results are rich and important to grasp. A list of the main conclusions, each with a statement of which feature/s of the results they are attributed to would be very illuminating for the uninformed. Tabulated along lines like:

Geometry is flat..... deduced from first peak angular width

Baryonic matter is 4%...deduced from fit to peaks l > 150

Dark energy is 75% ... deduced from distance between 3rd and 4th peaks

and so on, perhaps. (The entries above are of course a fiction ... I have no idea of how to draw such a table up).

The tutorial article you mentioned sounds like a good idea!
 
  • #70
Geometry is flat..... deduced from first peak angular width
This is a statement agreed by all in the community.

My personal beef - this statement more generally should be:

"Geometry is conformally flat...deduced from first peak angular width"
as the WMAP data is angular in nature and conformal transformations are angle preserving.

Garth
 