Observational evidence against an expanding universe in MNRAS

In summary, after a long discussion, it has been acknowledged that this paper, published in a reputable peer-reviewed journal, is a useful and constructive basis for discussion. The paper challenges the expanding universe hypothesis and presents evidence that contradicts predictions based on that hypothesis. The alternative hypothesis proposes that the universe is not expanding and that there is a linear relationship between redshift and distance. This hypothesis has been found to fit the observational data as well as the commonly accepted LCDM model does, but without the need for any free parameters. It is also noted that the observed phenomena of galaxy formation, star formation, and nuclear fusion do not require expansion to occur. Other possibilities, such as a fractal distribution of matter or a weakening of gravity at large distances, are also discussed.
  • #71
On p. 684 of this reference it gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in FUV and 5.3 arcsec in NUV.

As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.
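To make the two-peak thresholding argument concrete, here is a toy sketch (the distribution parameters are invented for illustration; this is not the GALEX/HUDF pipeline code) showing why any cut placed in the gap between the two stellarity peaks selects the same objects:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal stellarity distribution: extended sources pile up near 0,
# point sources near 1 (parameters are illustrative only).
stellarity = np.concatenate([
    np.clip(rng.normal(0.05, 0.08, 5000), 0, 1),  # extended sources
    np.clip(rng.normal(0.95, 0.08, 2000), 0, 1),  # point sources
])

# Any threshold placed between the peaks yields the same classification,
# so the exact choice of 0.4 is not critical.
for cut in (0.3, 0.4, 0.5):
    print(f"threshold {cut:.1f}: {np.sum(stellarity < cut)} extended sources")
```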

As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything.

For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with.

We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
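A toy numerical illustration of that last point (the lognormal size distribution here is invented, not taken from the paper): once the resolution limit passes the true median, galaxies below the limit are pinned at it, and the sample median just reports the limit.

```python
import numpy as np

rng = np.random.default_rng(1)
true_radii = rng.lognormal(mean=0.0, sigma=0.5, size=10000)  # true median = 1.0

for limit in (0.5, 1.0, 2.0):  # resolution limit sweeping past the median
    measured = np.maximum(true_radii, limit)  # unresolved -> pinned at the limit
    frac = np.mean(true_radii < limit)
    print(f"limit={limit}: {frac:.0%} unresolved, "
          f"sample median={np.median(measured):.2f} "
          f"(true median={np.median(true_radii):.2f})")
```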
 
  • #72
elerner said:
OK, great, just a misunderstanding! Then you agree that my paper is a test of the expanding universe prediction and that the predictions are contradicted by the data?

@elerner, this rhetorical style is not going to help the discussion. As @Dale has requested several times now, please use the PF quote feature to specify exactly what statements you are responding to. Otherwise the discussion will go nowhere and this thread will end up being closed.
 
  • #73
To rephrase: Dale, do you now agree that my paper is a test of the expanding-universe prediction and that this prediction is contradicted by the data?

If not, please provide quotations from the published literature that indicate why it is not. If you use an argument about "small scale", please include quotes giving the quantitative scale below which expansion is no longer theorized to occur, so that it can be compared with the 200-800 Mpc range, the smallest scale measured in my paper.

To be totally clear, and to repeat what is in the paper, this is a test of the hypothesis that the universe is expanding, using one set of data. An overall assessment requires considering all available data sets. Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong. But LCDM also includes several other hypotheses: hot Big Bang, inflation, dark matter, dark energy, etc.

In response to Jonathan Scott, Tolman's calculations are based only on the hypothesis that the Hubble relation is due to expansion. The (1+z)^-3 reduction in surface brightness is independent of any details of how fast the expansion occurs at any point in time. That's why the Tolman test is a test of all expansion models, not any specific ones.
 
  • #74
elerner said:
The (1+z)^-3 reduction in surface brightness

Shouldn't this be ##(1 + z)^{-4}##?
 
  • Like
Likes ruarimac
  • #75
I think this paper is getting a tad oversold. As the title of the paper carefully states, it does not point to a problem with standard cosmology or LCDM; it merely conflicts with a model. What the paper has shown is that a model of galaxy size evolution combined with concordance cosmology is not compatible with a UV surface brightness test. I cannot stress enough that it is the combination of galaxy evolution model and cosmology that is being tested, not just cosmology. This is the main reason the Tolman test does not play a big part in modern cosmology: this degeneracy between the effects of galaxy evolution and cosmology. In this case, moreover, the test was done in the rest-frame UV, which makes it incredibly sensitive to galaxy evolution, because the UV properties of a galaxy change on shorter timescales than, for example, the rest-frame optical.

To frame the discussion about the paper, here is a passage from the 2014 paper, which compared observations to an LCDM-like cosmology but did not attempt to model evolution, so it wasn't actually LCDM:

In this paper, we do not compare data to the LCDM model. We only remark that any effort to fit such data to LCDM requires hypothesizing a size evolution of galaxies with z.

What seems to be done in this new paper is to include a single model of galaxy size evolution. The disagreement is hardly surprising, however, because the model being tested is not a model of the ultraviolet sizes of galaxies. It comes from a paper written 20 years ago, which has to assume all disks have the same mass-to-light ratio in order to calculate a luminosity at all. The model does not include the formation of stars, and it outputs disk scale lengths, not ultraviolet radii. On this basis the comparison is apples to oranges, so it is hardly surprising that there is disagreement. There is a range of sophisticated galaxy formation simulations available today; they would be a much better comparison, given that they represent the leading edge of the field and that the selection function could be applied to them.

I reiterate, this paper is not evidence there is something wrong with standard cosmology. It is a test of a model of the size evolution of galaxies and cosmology.
 
  • Like
Likes Dale
  • #76
elerner said:
Dale, do you now agree that my paper is a test of the expanding-universe prediction and that this prediction is contradicted by the data?
I still stand by my previous very clear assessment of your paper which I posted back in post 23:
Dale said:
I do accept your paper as evidence against the standard cosmology, just not as very strong evidence due to the issues mentioned above. So (as a good Bayesian/scientist) it appropriately lowers my prior slightly from “pretty likely” to “fairly likely”, and I await further evidence.
The only thing that I would change is to include “issues mentioned below” as well.

elerner said:
Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong.
This is a speculative claim that is not made in the paper.
 
  • #77
ruarimac said:
This is the main reason the Tolman test does not play a big part in modern cosmology
Are there any references describing this view of the Tolman test?
 
  • #78
elerner said:
On p. 684 of this reference it gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in FUV and 5.3 arcsec in NUV.

As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.

As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything.

For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with.

We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
Thanks for your response there Eric.
To help resolve this matter, we have lodged a request directly with the GALEX team to see if they can provide their actual performance data on angular resolution ... (fingers crossed). We'll report back when we have their response.
In the meantime, if this thread gets locked, we could as an alternative continue the conversation at the IS forum ('Evidence against concordance cosmology' thread).
Cheers
 
  • #79
elerner said:
In response to Jonathan Scott, Tolman's calculations are based only on the hypothesis that the Hubble relation is due to expansion. The (1+z)^-3 reduction in surface brightness is independent of any details of how fast the expansion occurs at any point in time. That's why the Tolman test is a test of all expansion models, not any specific ones.
Tolman's 1930 paper clearly refers specifically to a spatially curved (presumably closed) universe, which I think was assumed to be the case at the time, so even if Tolman's calculations are correct, his assumptions are not necessarily correct. I'd say the new paper provides evidence against a spatially curved universe, but I don't know what the relevance of that is to current cosmology.
 
  • Like
Likes Dale
  • #80
elerner said:
To be totally clear, and to repeat what is in the paper, this is a test of the hypothesis that the universe is expanding, using one set of data. An overall assessment requires considering all available data sets. Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong. But LCDM also includes several other hypotheses: hot Big Bang, inflation, dark matter, dark energy, etc.

Your paper has not disproved the expanding universe; as you clearly state in the title, this is a test of a model of size evolution plus an expanding universe. I'm sure you had to negotiate that one with the referee, but it's not some irrelevant point. You have tested some model of size evolution plus concordance cosmology; you clearly take the interpretation that it is the cosmology that is wrong, but you have not demonstrated that.

elerner said:
As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.

How exactly do you model this selection effect in the Mo et al. model? You don't describe your model in detail at all, but you state that it predicts that the disk scale length varies as H(z) to some power at fixed luminosity. That does not take into account the fact that you have a biased sample (see the sketch below).
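To illustrate the concern numerically, here is a toy Monte Carlo (all numbers invented, not taken from either paper): a cut that removes the smallest objects biases the median radius of the survivors upward, by an amount that grows with the cut.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy population: lognormal scatter in disk size at fixed luminosity.
radii = rng.lognormal(mean=0.0, sigma=0.4, size=20000)  # arbitrary units

# An angular-size (resolution) cut removes the smallest objects; the
# median of the survivors is biased high relative to the full population.
for cut in (0.0, 0.5, 1.0):
    kept = radii[radii > cut]
    print(f"cut={cut}: kept {kept.size / radii.size:.0%}, "
          f"median={np.median(kept):.2f} (true median={np.median(radii):.2f})")
```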
 
  • Like
Likes Dale
  • #81
elerner said:
On p. 684 of this reference it gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in FUV and 5.3 arcsec in NUV. As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions. As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything. For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with. We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
The FWHM values quoted by GALEX seem to be based on the procedures described in 'Section 5. RESOLUTION' (p. 691).

As described in the procedure, bright stars are used, which leads to saturated cores in the PSF.
Saturation or near-saturation leads to high FWHM values; the procedure is performed to show details in the wings.
In reality, if the FWHM values are a true indication of angular-resolution performance, then the GALEX telescope either (i) has poor optics, (ii) is slightly out of focus, or (iii) is seeing-limited due to atmospheric interference effects. Option (iii) is obviously not applicable to GALEX and can be eliminated.

Assuming the FWHM values given are a true indication of telescope performance, then according to Eric et al.'s method any source smaller than the FWHM is a point source and beyond measurement; yet the method's cutoff is around 50% of the FWHM values for the FUV and NUV bands, which then appears simply to be in error(?)

We also maintain that the dependence of resolution on wavelength is still an untested issue with the analysis method, and cannot be ignored.
As per the Hubble site:
Hubble said:
Here we will try to answer the related question of how close together two features can be and still be discerned as separate – this is called the angular resolution.

The Rayleigh criterion gives the maximum (diffraction-limited) resolution, R, and is approximated for a telescope as
R = λ/D, where R is the angular resolution in radians and λ is the wavelength in metres. The telescope diameter, D, is also in metres.

In more convenient units we can write this as:
R (in arcseconds) = 0.21 λ/D, where λ is now the wavelength in micrometres and D is the size of the telescope in metres.

So for Hubble this is:
R = 0.21 x 0.500/2.4 = 0.043 arcseconds (for optical wavelengths, 500 nm) or
R = 0.21 x 0.300/2.4 = 0.026 arcseconds (for ultraviolet light, 300 nm).

Note that the resolution gets better at shorter wavelengths, so we will use the second of these numbers from now on.
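For reference, the same approximation applied to both telescopes; the 0.5 m GALEX aperture and the band wavelengths below are my assumed values, not taken from the quoted page:

```python
def rayleigh_arcsec(wavelength_um: float, diameter_m: float) -> float:
    """Approximate diffraction-limited resolution: R = 0.21 * lambda / D."""
    return 0.21 * wavelength_um / diameter_m

# Hubble (2.4 m primary), reproducing the quoted numbers:
print(f"HST optical (0.50 um): {rayleigh_arcsec(0.50, 2.4):.3f} arcsec")
print(f"HST UV      (0.30 um): {rayleigh_arcsec(0.30, 2.4):.3f} arcsec")

# GALEX (assumed 0.5 m primary; FUV ~0.15 um, NUV ~0.23 um):
print(f"GALEX FUV   (0.15 um): {rayleigh_arcsec(0.15, 0.5):.3f} arcsec")
print(f"GALEX NUV   (0.23 um): {rayleigh_arcsec(0.23, 0.5):.3f} arcsec")
```

On these assumptions GALEX's diffraction limit is of order 0.06-0.10 arcsec, i.e. the arcsecond-scale FWHM figures discussed above are set by the wide-field optical design and the detector, not by diffraction.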
 
  • #82
And here we go with a GALEX response:

Galex Performance, http://www.galex.caltech.edu/DATA/gr1_docs/GR1_Mission_Instrument_Overview_v1.htm:

"The design yields a field-averaged spot size of 1.6 arcsec (80%EE) for the FUV imagery and 2.5 arcsec (80%EE) for the FUV spectroscopy at 1600�. NUV performance is similar. There is no in-flight refocus capability".

So, the above 1.6 arcsec figure for the FUV imagery is much larger than the theoretical diffraction-limited performance we calculated earlier, but it is nowhere near the 4.2 arcsec FWHM (FUV) figure used by Eric et al. (as being indicative of actual performance)!

 
  • #84
newman22 said:
Galex resolution found here: 4.3 and 5.3 arcsec respectively http://www.galex.caltech.edu/researcher/techdoc-ch2.html
I think that's cited using the FWHM metric, whereas the 1.6 arcsec figure given in the GR1 Optical Design section is based on the Encircled Energy metric (80%) ... all of which then raises another question for Eric:

Was the system modelling used in his "UV surface brightness of galaxies from the local Universe to z ~ 5" paper, in coming up with the 1/38 ratio (θmGALEX/θmHUDF), sufficiently detailed to compensate for the two different metrics typically quoted for characterising the respective HUDF and GALEX optical performance figures?

If so, then how was this done?
 
  • #85
I think it's fair to say that the robustness of the methods used in Lerner+ (2014) (L14), especially those using GALEX data, is critical to this paper (Lerner 2018). For example, in Figure 2:

"The log of the median radii of UV-bright disk galaxies M ~-18 from Shibuya et al, 2016 and the GALEX point at z=0.027 from Lerner, Scarpa and Falomo, 2014 is plotted against log of H(z) ,the Hubble radius at the given redshift"

Remove that "GALEX point at z=0.027" and I doubt that many (any?) of the results/conclusions would be valid.

There seems, to me, to be what could be a serious omission in L14; maybe you could say a few words about it, elerner?

"These UV data have the important advantage of being sensitive only to emissions from very young stars."

Well, AGNs are known to be (at least sometimes) strong emitters of UV. And in GALEX they'd appear to be indistinguishable from PSFs (by themselves). They can also make a galaxy appear to have a lower Sersic index ("Sersic number" in L14) if the galaxy is fitted with a single-component radial profile. Finally, in comparison to the luminosity of the rest of the galaxy, an AGN can range from totally dominant (as in most QSOs) to barely detectable.

My main question about L14 (for now) is about this:

"For the GALEX sample, we measured radial brightness profiles and fitted them with a Sersic law, [...]"

Would you please describe how you did this, elerner? I'm particularly interested in the details of how you did this for GALEX galaxies which are smaller than ~5" (i.e. less than ~twice the resolution or PSF width).

To close, here's something from L14 that I do not understand at all; could someone help me please (doesn't have to be elerner)?

"Finally we restricted the samples to disk galaxies with Sersic number <2.5 so that radii could be measured accurately by measuring the slope of the exponential decline of SB within each galaxy."
 
  • Like
Likes Dale
  • #86
Correction: “... which are smaller than ~10” (i.e. less than ~twice the resolution...)”. Per what’s in some earlier posts, and GALEX, the resolution in both UV bands is ~5”.
 
  • #87
Unless Eric finds time to respond, we're going to have to sum up our concerns as follows:

Eric et al.'s method allows more GALEX data to be included in the analysis because his GALEX data cutoffs (2.4 and 2.6 arcsec) are ~50% lower than what he cites as the GALEX telescope's resolution limits (i.e. 4.2 and 5.3 arcsec FWHM for FUV and NUV respectively). His method doesn't appear to explicitly address and correct for this.

Then, for the Hubble data: the proposed cutoffs don't appear to vary with the wavelength of the observations as they approach the theoretical (Rayleigh) optical limits of the telescope.

The 1/38 ratio figure used seems to have no relevance in light of the issues outlined above.

The Hubble data itself thus refutes the methodology, given its failure to find resolution differences in the individual HUDF filter data.

If Eric agrees with the above, then it would be very nice for him to consider some form of formalised corrective measures.

Cheers
 
  • #88
Jean Tate said:
Remove that "GALEX point at z=0.027" and I doubt that many (any?) of the results/conclusions would be valid.
Without that point the concordance model is a good fit, as already shown in the previous literature. If that one point is faulty then there isn’t anything else in the paper.

I wondered if something were systematically different in the methodology for that point:
Dale said:
which could therefore have measurements which were non-randomly different from the remainder of the dataset,
 
  • #89
Papers "challenging the mainstream" in MNRAS and other leading astronomy/astrophysics/cosmology peer-reviewed journals are unusual but Lerner (2018) (L18) is certainly not unique.

I think L18 offers PhysicsForums (PF) a good opportunity to illustrate how science works (astronomy in this case). In a hands-on way. And in some detail. Let me explain.

One core part of science may be summed up as "objective, and independently verifiable". I propose that we - PFers (PFarians? PFists?) - can, collectively, objectively and independently verify at least some of the key parts of the results and conclusions reported in L18. Here's how:

As I noted in my earlier post, the robustness of the methods used in Lerner+ (2014) (L14), especially those using GALEX data, is critical to L18. I propose that we - collectively - go through L14 and attempt to independently verify the "GALEX" results reported therein (we could also do the same for the "HUDF" results, but perhaps that might be a tad ambitious).

I am quite unfamiliar with PF's mores and rules, so I do not really have any feel for whether this would meet with PF's PTB approval. Or if it did, whether this thread/section is an appropriate place for such an effort. But I'm sure I'll hear one way or the other soon!

Anyway, as the saying goes, "better to ask for forgiveness than permission". So I'll soon be posting several, fairly short, posts. Posts which actually kick off what I propose, in some very concrete ways.
 
  • #90
Question for elerner: how easy would it be, in your opinion, to independently reproduce the GALEX and HUDF results published in Lerner+ (2014) (L14)?

To help anyone who would want to do this, would you please create and upload (e.g. to GitHub) a 'bare bones' file (CSV, FITS, or other common format) containing the GALEX and HUDF data you started with ("galaxies (with stellarity index < 0.4) vs. angular radius for all GALEX MIS3-SDSSDR5 galaxies and for all HUDF galaxies")? Alternatively, would you please describe where one can obtain such data oneself?

Thank you in advance.
 
  • #91
If you do not have a workable answer to my second question above (actually a two-parter), how could you go about obtaining that data yourself?

Start with L14, the paper. The references in fact.

There are 16, and ADS can help you get at least the titles, abstracts, etc. (the actual papers may be behind paywalls). Of these 16, I think two may be relevant for the HUDF data (10. Beckwith S. V. W., Stiavelli M., Koekemoer A. M., et al., AJ 132, (2006) 1729, and 11. Coe D., Benítez N., Sánchez S. F., Jee M., Bouwens R., Ford H., AJ 132, (2006) 926), but none appear relevant for the GALEX data. Do you agree?

Hmm, so maybe the paper itself gives a pointer to the GALEX data?

"... all GALEX MIS3-SDSSDR5 galaxies ..."

What do you think? Can you use that to find where the L14 GALEX data comes from? To actually obtain that data?
 
  • #92
Hi all, I have been busy with other things so have not visited here for the past few days.
In more or less chronological order:

Peter, (1+z)^-3 is correct if we measure in AB magnitudes (per unit frequency). This is the unit the papers use.
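For anyone following the exponent exchange, the standard bookkeeping behind both numbers (textbook material, not specific to either paper) is sketched below:

```latex
% Bolometric surface brightness under expansion:
%   photon energy        ~ (1+z)^{-1}
%   photon arrival rate  ~ (1+z)^{-1}
%   apparent solid angle ~ (1+z)^{+2}
% => Tolman's bolometric result:
\[ \Sigma_{\mathrm{bol}} \propto (1+z)^{-4} \]
% Per unit frequency (AB magnitudes), the observed bandwidth is
% compressed by one factor of (1+z), which cancels one power:
\[ \Sigma_{\nu} \propto (1+z)^{-3},
   \qquad
   \Delta\mu_{\mathrm{AB}} = 7.5\,\log_{10}(1+z)\ \mathrm{mag\ arcsec^{-2}} \]
```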

Ruarimac:

This is a test of predictions—things written before data is taken. Predictions are crucial in science. If you can’t predict data before you observe it, then you are not doing useful science. Just fitting the data once you have it is useless unless you can use it to predict lots more data that you have not observed. As the saying goes, with 4 variables, I can fit an elephant. That is why I used predictions from before the data was available. In addition, any merger process is contradicted by the observations of the number of mergers and any growth needed to match the data is contradicted by the measurements of gravitational mass vs stellar mass—unless you want to hypothesize negative mass dark matter (as I’m sure someone will do.)

Jonathan Scott:

Tolman’s derivation does not depend on curvature. You can find it in many places in the literature since 1930. It only depends on expansion.

On GALEX, measurement, etc.

Selfsim—you did not read my comment that my measured resolution refers to radius while FWHM refers to diameter. The key point is that with both Hubble and GALEX the resolution is mainly linked to the pixel size. That is why it is not linked to the wavelength—the pixel size does not change with wavelength.

Jean Tate: Not just the point at 0.027 but all the low z points up to z=0.11 are used for comparisons with our 2014 data. The whole point of the Tolman test is to compare sizes as we measure them at low z, where there is no cosmic distortion, with those at high z (or comparing SB of the same luminosity galaxies, which is the same as measuring size). So you can’t drop the near points if you want to do the test.

The reason we can measure tiny galaxies is that when we talk about radius, that is half-light radius, the radius that contains half the light. Since disk galaxy light falls off exponentially, you can observe these bright galaxies way out beyond their half light radius and thus you can get very nice fits to an exponential line. The Sersic number is used as a cutoff between disk galaxies and ellipticals. AGNs don’t interfere as we dropped the central area of the galaxy which is most affected by the PSF blurring. The exponential fit starts further out—all explained in the 2014 paper.

By the way, I don’t think checking our measurements is all that useful as we already checked them against the GALEX catalog, and they are quite close. But we wanted to make sure we were measuring HUDF and GALEX the exact same way.
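A quick check of the half-light-radius point above: for a pure exponential disk the enclosed-light fraction has a closed form, so the half-light radius is a fixed multiple (about 1.68) of the fitted scale length. A minimal sketch (a standard result, not code from either paper):

```python
import numpy as np
from scipy.optimize import brentq

def enclosed_light_fraction(x: float) -> float:
    """Fraction of total light inside r = x * h for an exponential disk
    I(r) = I0 * exp(-r/h):  L(<x)/L_tot = 1 - (1 + x) * exp(-x)."""
    return 1.0 - (1.0 + x) * np.exp(-x)

# Solve for the half-light radius in units of the scale length h:
x_half = brentq(lambda x: enclosed_light_fraction(x) - 0.5, 0.1, 10.0)
print(f"half-light radius = {x_half:.3f} scale lengths")  # ~1.678
```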

Sure I can put our old 2014 data up somewhere. It would be great to have others work on it. Where would you suggest? However, it is by no means the most recent data release. I can also post how to get the more recent data. But not tonight.
 
  • #93
elerner said:
Peter, (1+z)^-3 is correct if we measure in AB magnitudes (per unit frequency).

Ok, got it.
 
  • #94
elerner said:
... Selfsim—you did not read my comment that my measured resolution refers to radius while FWHM refers to diameter. The key point is that with both Hubble and GALEX the resolution is mainly linked to the pixel size. That is why it is not linked to the wavelength—the pixel size does not change with wavelength.
Eric - Thanks for your reply.

However, as described by the Rayleigh criterion (θ = 1.22 λ/D), where the resolution is degraded by the filter choice, more light ends up falling onto adjacent pixels, which then affects the radius (or diameter) of the FWHM (or ensquared-energy value).

In an attempt to put the resolution-vs-filter issue to rest in the interim, we've performed a small test of our own by downloading some F475W- and F814W-filtered data for the object Messier 30 from the Hubble Legacy Archive site.

The ACS was used, but unlike the HUDF images, the F814W filter was used instead of the F850LP.

Unlike our previous discussion, where only the optics were considered, the system angular resolution, which links both wavelength and pixel size, is defined by the equation:

System angular resolution ≈ [(0.21λ/2.4)² + 0.05²]^0.5

The 0.05 term is the size of the ACS pixels in arcsec.
For the F475W filter the theoretical resolution is 0.065 arcsec, and for the F814W filter it is 0.087 arcsec.
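Written out as a quick check (same formula and coefficients as above):

```python
import math

def system_resolution_arcsec(wavelength_um: float,
                             aperture_m: float = 2.4,
                             pixel_arcsec: float = 0.05) -> float:
    """Optical term and pixel size combined in quadrature, as in the text."""
    optics = 0.21 * wavelength_um / aperture_m
    return math.hypot(optics, pixel_arcsec)

print(f"F475W: {system_resolution_arcsec(0.475):.3f} arcsec")  # ~0.065
print(f"F814W: {system_resolution_arcsec(0.814):.3f} arcsec")  # ~0.087
```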

For the Messier 30 data, the same six stars of medium brightness were handpicked for each filter so as not to distort the FWHM measurements. (AIP4WIN software was used for the measurements).


In all cases, the FWHM of the stars was higher in the F814W data, the results being:

F475W data: FWHM = 0.279 +/- 0.035 arcsec
F814W data: FWHM = 0.326 +/- 0.058 arcsec

A larger sample size would have been preferable, but the dependence of resolution on wavelength is clearly evident.

Irrespective of what method you employ, the lack of differentiation of resolution between the Hubble filtered data sets is telling.
 
  • #95
elerner said:
This is a test of predictions—things written before data is taken. Predictions are crucial in science. If you can’t predict data before you observe it, then you are not doing useful science. Just fitting the data once you have it is useless unless you can use it to predict lots more data that you have not observed. As the saying goes, with 4 variables, I can fit an elephant. That is why I used predictions from before the data was available. In addition, any merger process is contradicted by the observations of the number of mergers and any growth needed to match the data is contradicted by the measurements of gravitational mass vs stellar mass—unless you want to hypothesize negative mass dark matter (as I’m sure someone will do.)

You're just making broad statements without actually addressing the points. I never mentioned fitting; please do not misrepresent my words.

You have tested a very specific model of disk size evolution combined with cosmology, but you are selling this as evidence against cosmology specifically when you haven't demonstrated that. You believe it's not mergers but that's hardly the only thing absent from this model. Take for example the fact that this model is not modelling the UV sizes which you are comparing it to. Or the fact that you haven't tested the effect of your cuts to the data, you will have selection effects which will change with redshift due to using different bands. Or the fact you haven't made K corrections due to the fact that the different filters have different transmission curves. Or the fact that in applying this model to UV surface brightness you don't take into account the varying star formation rate density with time in an expanding universe, as observed. Or the fact you have to assume the Tully-Fisher relation is fixed up to z~5 and that it applies to all of your galaxies. And then there's the effect of mergers and blending.

As I said before, you have tested a single model which has all of these shortcomings; you do not justify why this is the model that should be correct above all others. This was not the only model available. You haven't demonstrated that this mismatch is a problem with cosmology and not with your attempt to model the observations. You haven't convinced me this is a problem with cosmology, given that such a conclusion relies on a single model and a shopping list of assumptions.
 
  • Like
Likes weirdoguy
  • #96
Thanks for your reply and continued interest in your paper, elerner!
elerner said:
Hi all, I have been busy with other things so have not visited here for the past few days.
In more or less chronological order:

<snip>

On GALEX, measurement, etc.

<snip>

Jean Tate: Not just the point at 0.027 but all the low z points up to z=0.11 are used for comparisons with our 2014 data. The whole point of the Tolman test is to compare sizes as we measure them at low z, where there is no cosmic distortion, with those at high z (or comparing SB of the same luminosity galaxies, which is the same as measuring size). So you can’t drop the near points if you want to do the test.

The reason we can measure tiny galaxies is that when we talk about radius, that is half-light radius, the radius that contains half the light. Since disk galaxy light falls off exponentially, you can observe these bright galaxies way out beyond their half light radius and thus you can get very nice fits to an exponential line. The Sersic number is used as a cutoff between disk galaxies and ellipticals. AGNs don’t interfere as we dropped the central area of the galaxy which is most affected by the PSF blurring. The exponential fit starts further out—all explained in the 2014 paper.

By the way, I don’t think checking our measurements is all that useful as we already checked them against the GALEX catalog, and they are quite close. But we wanted to make sure we were measuring HUDF and GALEX the exact same way.

Sure I can put our old 2014 data up somewhere. It would be great to have others work on it. Where would you suggest? However, it is by no means the most recent data release. I can also post how to get the more recent data. But not tonight.
I have many, many questions. Some come from my initial reading of Lerner (2018) (L18); some from your latest post. I will, however, focus on just a few.
So you can’t drop the near points if you want to do the test.
My primary interest was, and continues to be, Lerner+ (2014) (L14). However, I see that you may have misunderstood what I wrote; so let me try to be clearer.

I "get" that Lerner (2018) (L18) must include some low z data. And I think I'm correct in saying that L18 relies critically on the robustness and accuracy of the results reported in L14. In particular, the "the GALEX point at z=0.027 from Lerner, Scarpa and Falomo,2014". Does anyone disagree?

It makes little difference if that GALEX point is at z=0, or z=0.11, or anywhere in between. Does anyone disagree?

However, it makes a huge difference if that GALEX point is not near log(r/kpc) ≈ 0.8.

I am very interested in understanding just how robust that ~0.8 value is. Based on L14.
AGNs don’t interfere as we dropped the central area of the galaxy which is most affected by the PSF blurring. The exponential fit starts further out—all explained in the 2014 paper.
Actually, no. It is not all so explained.

I've just re-read L14; a) AGNs are not mentioned, and b) there's no mention of dropping the central area for galaxies which are smaller than ~10" ("PSF blurring" is almost certainly important out to ~twice the PSF width).

There are two questions in my first post in this thread which you did not answer, elerner; perhaps you missed them?

Here they are again:

JT1) In L14, you wrote: "For the GALEX sample, we measured radial brightness profiles and fitted them with a Sersic law, [...]".
Would you please describe how you did this? I'm particularly interested in the details of how you did this for GALEX galaxies which are smaller than ~10" (i.e. less than ~twice the resolution or PSF width).

JT2) In L14, you wrote: "Finally we restricted the samples to disk galaxies with Sersic number <2.5 so that radii could be measured accurately by measuring the slope of the exponential decline of SB within each galaxy."
I do not understand this. Would you please explain what it means?
Sure I can put our old 2014 data up somewhere. It would be great to have others work on it. Where would you suggest?
I've already made one suggestion (GitHub); perhaps others have other suggestions?

By the way, when I tried to access L18 (the full paper, not the abstract) from the link in the OP, I got this message:

"You do not currently have access to this article."

And I was invited to "Register" to get "short-term access" ("24 Hours access"), which would cost me USD $33.00. So instead I'm relying on the v2 arXiv document (link). Curiously, v2 was "last revised 2 Apr 2018", but "Journal reference: Monthly Notices of the Royal Astronomical Society, sty728 (March 22, 2018)". Could you explain please elerner?
 
  • #97
ruarimac said:
You're just making broad statements without actually addressing the points. I never mentioned fitting; please do not misrepresent my words.

You have tested a very specific model of disk size evolution combined with cosmology, but you are selling this as evidence against cosmology specifically when you haven't demonstrated that. You believe it's not mergers but that's hardly the only thing absent from this model. Take for example the fact that this model is not modelling the UV sizes which you are comparing it to. Or the fact that you haven't tested the effect of your cuts to the data, you will have selection effects which will change with redshift due to using different bands. Or the fact you haven't made K corrections due to the fact that the different filters have different transmission curves. Or the fact that in applying this model to UV surface brightness you don't take into account the varying star formation rate density with time in an expanding universe, as observed. Or the fact you have to assume the Tully-Fisher relation is fixed up to z~5 and that it applies to all of your galaxies. And then there's the effect of mergers and blending.

As I said before, you have tested a single model which has all of these shortcomings; you do not justify why this is the model that should be correct above all others. This was not the only model available. You haven't demonstrated that this mismatch is a problem with cosmology and not with your attempt to model the observations. You haven't convinced me this is a problem with cosmology, given that such a conclusion relies on a single model and a shopping list of assumptions.
(my bold)

L14 seems replete with such assumptions.

Both explicitly stated and not. Such as some concerning AGNs, one aspect of which I addressed in my last post:
Jean Tate said:
elerner said:
AGNs don’t interfere as we dropped the central area of the galaxy which is most affected by the PSF blurring. The exponential fit starts further out—all explained in the 2014 paper.
Actually, no. It is not all so explained.

I've just re-read L14; a) AGNs are not mentioned, [...]
Reminder; here's what's in L14:
L14 said:
These UV data have the important advantage of being sensitive only to emissions from very young stars.
In his last post here, elerner seems to have hinted at another, unstated assumption:
elerner said:
The Sersic number is used as a cutoff between disk galaxies and ellipticals.
The implication (not even hinted at in L14) is that the only galaxy morphological classes are "disk galaxies" and "ellipticals". Or at least, only those two in the UV. The L14 authors seem to have been unaware of the extensive literature on this topic ...
 
  • #98
Hard for me to keep up with all of you in the time available. Simple things first. The new version corrects a reference and will soon be posted on MNRAS as well. If you missed the free download, go to our website https://lppfusion.com/lppfusion-chi...-against-cosmic-expansion-in-leading-journal/ and click on “paper” to get a free copy. I can’t post the link directly without violating their rules.

Here is how we did the measurements in 2014:

To measure total flux and half light radius, we extracted the average surface brightness profile for each galaxy from the HUDF or GALEX images. The apparent magnitude of each galaxy is determined by measuring the total flux within a fixed circular aperture large enough to accommodate the largest galaxies, but small enough to avoid contamination from other sources. To choose the best aperture over which to extract the radial profile, for each sample we compared average magnitudes and average radii as derived for a set of increasingly large apertures. We then defined the best aperture as the smallest for which average values converged. We found that these measurements are practically insensitive to the chosen aperture above this minimum value.

Finally, to determine scale-length radius, we fitted the radial brightness profile with a disk law excluding the central 0.1 arcsec for HST and 5 arcsec for GALEX, which could be affected by the PSF smearing. Given the magnitude and radius, the SB is obtained via the formulae in Section 2. A direct comparison between our measurements and those in the i band HUDF catalogue (Coe et al 2006) shows no significant overall differences.
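A minimal sketch of that kind of disk-law fit (illustrative only, not the authors' actual code; the profile below is synthetic): fit surface brightness in mag/arcsec² against radius as a straight line, excluding the PSF-affected core, and read the scale length off the slope.

```python
import numpy as np

def fit_disk_scale_length(r_arcsec, sb_mag, r_min):
    """Fit an exponential-disk law, mu(r) = mu0 + 1.0857 * r / h,
    excluding r < r_min (the PSF-affected core).  1.0857 = 2.5 / ln(10).
    Returns (mu0, h)."""
    mask = r_arcsec >= r_min
    slope, mu0 = np.polyfit(r_arcsec[mask], sb_mag[mask], 1)
    return mu0, 1.0857 / slope

# Synthetic profile with scale length h = 3 arcsec, fitted while
# excluding the inner 5 arcsec (the GALEX exclusion radius above):
r = np.linspace(0.5, 20.0, 40)
mu = 21.0 + 1.0857 * r / 3.0
mu0, h = fit_disk_scale_length(r, mu, r_min=5.0)
print(f"mu0 = {mu0:.2f} mag/arcsec^2, scale length h = {h:.2f} arcsec")
```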

Here is how we checked for non-disks:

Finally we have checked, by visual inspection of galaxies in the sample, that removing objects exhibiting signatures of interaction or merging does not change our conclusions. The selection of galaxies with disturbed morphology was performed by an external team of nine amateur astronomers evaluating the NUV images and isophote contours of all NUV-sample galaxies. Each volunteer examined the galaxies, and only those considered unperturbed by more than 5 people were included in a “gold” sample. Although this procedure reduces the size of the sample, there is no significant difference in the SB-z trend.
 
  • #99
Haven't heard from any PF Mods yet, and it seems that there's rather a lack of interest in my proposal (to independently try to verify the GALEX results reported in L14). So this will likely be my last post on that (my proposal).
Jean Tate said:
If you do not have a workable answer to my second question above (actually a two-parter), how could you go about obtaining that data yourself?

Start with L14, the paper. The references in fact.

There are 16, and ADS can help you get at least the titles, abstracts, etc. (the actual papers may be behind paywalls). Of these 16, I think two may be relevant for the HUDF data (10. Beckwith S. V. W., Stiavelli M., Koekemoer A. M., et al., AJ 132, (2006) 1729, and 11. Coe D., Benítez N., Sánchez S. F., Jee M., Bouwens R., Ford H., AJ 132, (2006) 926), but none appear relevant for the GALEX data. Do you agree?

Hmm, so maybe the paper itself gives a pointer to the GALEX data?

"... all GALEX MIS3-SDSSDR5 galaxies ..."

What do you think? Can you use that to find where the L14 GALEX data comes from? To actually obtain that data?
"SDSS" is likely well-known to most readers; it refers to the Sloan Digital Sky Survey, and images from it were used in the hugely successful online citizen science project, Galaxy Zoo (there are quite a few iterations of Galaxy Zoo, using images/data from several surveys other than SDSS, but not GALEX as far as I know).

"DR5" means Data Release 5.

I did not know what "MIS3" meant (maybe I did, once, but forgot); however, it's fairly easy to work out using your fave search engine (mine is DuckDuckGo) ... "MIS" is the Medium Imaging Survey, and "3" likely refers to GALEX Data Release 3.

Both SDSS and GALEX have official websites, and from those it's pretty straight-forward to find out how to access the many data products from those surveys.

Rather than doing that, I'd like to introduce a resource which you may not know about, VizieR. If you enter "GALEX" in the "Find catalogues" box, the first (of four) hits you'll see is "II/312", "GALEX-DR5 (GR5) sources from AIS and MIS (Bianchi+ 2011)", and you have several "Access" choices. True, it's not the GALEX MIS3, but is surely a superset.
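For anyone who wants to pull that catalogue programmatically, here is a sketch using astroquery (the II/312 identifier is from VizieR as noted above; the example position is arbitrary, and column details should be checked against the catalogue description):

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.vizier import Vizier

# GALEX-DR5 (GR5) sources from AIS and MIS (Bianchi+ 2011) on VizieR.
viz = Vizier(catalog="II/312")
viz.ROW_LIMIT = 500  # the default row limit is small; raise it for real work

# Example cone search around an arbitrary position:
coord = SkyCoord(ra=150.0 * u.deg, dec=2.2 * u.deg, frame="icrs")
tables = viz.query_region(coord, radius=10 * u.arcmin)
for table in tables:
    print(table.meta.get("name", "?"), len(table), "rows")
```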
 
  • Like
Likes Dale
  • #100
Thank you, elerner.
elerner said:
Hard for me to keep up with all of you in the time available. Simple things first. The new version corrects a reference and will soon be posted on MNRAS as well. If you missed the free download, go to our website https://lppfusion.com/lppfusion-chi...-against-cosmic-expansion-in-leading-journal/ and click on “paper” to get a free copy. I can’t post the link directly without violating their rules.

Here is how we did the measurements in 2014:

To measure total flux and half light radius, we extracted the average surface brightness profile for each galaxy from the HUDF or GALEX images. The apparent magnitude of each galaxy is determined by measuring the total flux within a fixed circular aperture large enough to accommodate the largest galaxies, but small enough to avoid contamination from other sources. To choose the best aperture over which to extract the radial profile, for each sample we compared average magnitudes and average radii as derived for a set of increasingly large apertures. We then defined the best aperture as the smallest for which average values converged. We found that these measurements are practically insensitive to the chosen aperture above this minimum value.

Finally, to determine scale-length radius, we fitted the radial brightness profile with a disk law excluding the central 0.1 arcsec for HST and 5 arcsec for GALEX, which could be affected by the PSF smearing. Given the magnitude and radius, the SB is obtained via the formulae in Section 2. A direct comparison between our measurements and those in the i band HUDF catalogue (Coe et al 2006) shows no significant overall differences.

Here is how we checked for non-disks:

Finally we have checked, by visual inspection of galaxies in the sample, that removing objects exhibiting signatures of interaction or merging does not change our conclusions. The selection of galaxies with disturbed morphology was performed by an external team of nine amateur astronomers evaluating the NUV images and isophote contours of all NUV-sample galaxies. Each volunteer examined the galaxies, and only those considered unperturbed by more than 5 people were included in a “gold” sample. Although this procedure reduces the size of the sample, there is no significant difference in the SB-z trend.
It'll take me a while to fully digest this, particularly as I want to understand it in terms of the content of L14.

However, I'm even more curious about how you "fitted the radial brightness profile with a disk law excluding the central 0.1 arcsec for HST and 5 arcsec for GALEX".

For example, did you write your own code? Or use a publicly available tool or package? Something else??
 
  • #102
The thread is being reopened in order to allow continued discussion of the specific paper referenced in the OP, and to allow @elerner to respond to specific questions regarding that paper (and the 2014 paper that it is based on). Please limit discussion to that specific topic. This thread is not about the general methodology of science or the overall pros and cons of the current mainstream cosmological model of the universe.

@elerner, in responding to questions, please give specific cites to your papers rather than general claims and opinions. We understand your basic claims; we are looking for the specific evidence and arguments given in your papers that you think support those claims, not repetitions of the claims themselves. Additional fine details of methodology not provided in the papers are fine (since that is a large part of what other posters have asked about).
 
  • Like
Likes Dale and nnunn
  • #103
Jean Tate said:
it seems that there's rather a lack of interest in my proposal (to independently try to verify the GALEX results reported in L14)

This is outside the scope of PF. Independent replication of scientific results is original research, which is not what PF is for.
 
  • #104
ruarimac said:
There is a range of sophisticated galaxy formation simulations available today; they would be a much better comparison, given that they represent the leading edge of the field and that the selection function could be applied to them.

@elerner, this is one question that I did not see you respond to. Is there a reason why the particular model of galaxy size evolution used in the paper was chosen? Or is there further work planned to apply a similar methodology to a wider range of models of galaxy size evolution?
 
  • #105
elerner said:
sizes as we measure them at low z, where there is no cosmic distortion

Doesn't this contradict your claim that the "expansion hypothesis" applies at all scales, right down to ##z = 0##?
 
