Fisher Forecasting For EUCLID Survey Help

In summary, the conversation concerns an attempt to replicate a research paper's Fisher-matrix forecasts of the constraints on the matter density and the Hubble constant h. The original poster's Fisher matrix elements come out far too large, and they suspect the very large survey volume entering the calculation is to blame. They provide their Mathematica code for the F_11 element of the Fisher matrix, the formulas and integrals it relies on, and possible sources of error.
  • #1
xdrgnh
I'm trying to recreate the results of this paper, https://arxiv.org/pdf/1607.08016.pdf (figure attached), to obtain the constraints on the matter density and the Hubble constant h.

However, every time I try to recreate their results, my Fisher matrix has elements of order 10^14, which is far too high. I suspect this is happening because the Vsurvey I'm calculating is so large. I have no idea how they were able to get their results. I'll attach my Mathematica code for the F_11 element of the Fisher matrix. I don't know if I'm misunderstanding a formula, if it's a Mathematica error, or if there is some missing step.

Parallelize[
 Total[
  Table[
   (NIntegrate[
       (E^(0))*
        ((D[Log[Pobs, H]] /. {
            Da -> (300000/(1 + z)*NIntegrate[1/(68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + Z)^3 + 0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}]),
            H -> 68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + z)^3 + 0.00008824284992310034` (1 + z)^4)})^2)*
        (Veff /. {
            Da -> (300000/(1 + z)*NIntegrate[1/(68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + Z)^3 + 0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}]),
            H -> 68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + z)^3 + 0.00008824284992310034` (1 + z)^4)})*
        k^2/(8*Pi^2),
       {u, -1, 1}, {k, 0, f[z]}])*
     ((D[(100 hh Sqrt[1 - 0.000041769223554`/(hh)^2 - MM + MM (1 + z)^3 + (0.000041769223554` (1 + z)^4)/(hh)^2]), MM] /. {MM -> .2984, hh -> .688})^2) +
    2*(D[(300000/(1 + z)*NIntegrate[1/(100 hh Sqrt[1 - 0.000041769223554`/(hh)^2 - MM + MM (1 + Z)^3 + (0.000041769223554` (1 + Z)^4)/(hh)^2]), {Z, 0, z}]), MM] /. {MM -> .2984, hh -> .688})*
     (D[(100 hh Sqrt[1 - 0.000041769223554`/(hh)^2 - MM + MM (1 + z)^3 + (0.000041769223554` (1 + z)^4)/(hh)^2]), MM] /. {MM -> .2984, hh -> .688})*
     NIntegrate[
      (E^(0))*
       (D[Log[Pobs, H]] /. {
          H -> (68.8` Sqrt[0.7015117571500769` + 0.2984` (1 + z)^3 + 0.00008824284992310034` (1 + z)^4]),
          Da -> (300000/(1 + z)*NIntegrate[1/(68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + Z)^3 + 0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}])})*
       (D[Log[Pobs, Da]] /. {
          Da -> (300000/(1 + z)*NIntegrate[1/(68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + Z)^3 + 0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}]),
          H -> (68.8` Sqrt[0.7015117571500769` + 0.2984` (1 + z)^3 + 0.00008824284992310034` (1 + z)^4])})*
       (Veff /. {
          H -> (68.8` Sqrt[0.7015117571500769` + 0.2984` (1 + z)^3 + 0.00008824284992310034` (1 + z)^4]),
          Da -> (300000/(1 + z)*NIntegrate[1/(68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + Z)^3 + 0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}])})*
       k^2/(8*Pi^2),
      {u, -1, 1}, {k, 0, f[z]}] +
    ((D[(300000/(1 + z)*NIntegrate[1/(100 hh Sqrt[1 - 0.000041769223554`/(hh)^2 - MM + MM (1 + Z)^3 + (0.000041769223554` (1 + Z)^4)/(hh)^2]), {Z, 0, z}]), MM] /. {MM -> .2984, hh -> .688})^2)*
     (NIntegrate[
       (E^(0))*
        ((D[Log[Pobs, Da]] /. {
            H -> (68.8` Sqrt[0.7015117571500769` + 0.2984` (1 + z)^3 + 0.00008824284992310034` (1 + z)^4]),
            Da -> (300000/(1 + z)*NIntegrate[1/(68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + Z)^3 + 0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}])})^2)*
        (Veff /. {
            Da -> (300000/(1 + z)*NIntegrate[1/(68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + Z)^3 + 0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}]),
            H -> 68.8` \[Sqrt](0.7015117571500769` + 0.2984` (1 + z)^3 + 0.00008824284992310034` (1 + z)^4)})*
        k^2/(8*Pi^2),
       {u, -1, 1}, {k, 0, f[z]}]),
   {z, .7, 2.1, .1}]]]

This code is supposed to calculate F_11.
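Schematically (in my own shorthand, not notation copied from the paper), each redshift bin is supposed to contribute

$$f_{ij}(z)=\int_{-1}^{1}d\mu\int_{0}^{k_{\max}(z)}\frac{\partial \ln P_{\rm obs}}{\partial p_i}\,\frac{\partial \ln P_{\rm obs}}{\partial p_j}\,V_{\rm eff}\,\frac{k^{2}\,dk}{8\pi^{2}},$$

with ##p_1 = H(z)## and ##p_2 = D_A(z)##; this is then projected onto ##(\Omega_M, h)## and summed over the bins from z = 0.7 to 2.1. The quantities it uses are: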

Pmatter = E^(-k^2*u^2*rr^2)*(((8 Pi^2*(300000)^4*.002*2.45*10^-9)/(25*((100*h)^4)*M^2))*(0.02257`/(h^2*M)*Tb + ((M - 0.02257)/M)*Tc)^2)*((Gz/Go)^2)*(k/.002)^.96

Pobs = ((Dref)^2*H)/(Da^2*Href)*Pg;
Veff = (((1.2*Pg)/(1.2*Pg + 1))^2)*Vsurvey;

Pg = (1 + z) (1 + (0.4840378144001318` k^2)/((k^2 + u^2) Sqrt[1 + z]))^2*Pmatter
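Written out (my transcription of what the code above encodes, not a quote from the paper), the Pobs and Veff relations are

$$P_{\rm obs}=\frac{D_{A,\rm ref}^{2}\,H}{D_{A}^{2}\,H_{\rm ref}}\,P_{g},\qquad V_{\rm eff}=\left(\frac{n\,P_{g}}{n\,P_{g}+1}\right)^{2}V_{\rm survey},$$

with the 1.2 playing the role of the galaxy number density ##n## in these units.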

[Three image attachments]


If anyone has had similar issues, can offer any help, or has done this calculation before, I will greatly appreciate it.

Oh, and my Vsurvey is Vsurvey = 5.98795694781456`*^11 Mpc^3.
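As far as I can tell, that number is just the comoving volume of the full-sky shell between z = 0.7 and z = 2.1 in my fiducial cosmology. A minimal sketch of that check (c = 300000 km/s, H in km/s/Mpc, so distances come out in Mpc):

(* Comoving volume of the full 4 Pi shell between z = 0.7 and z = 2.1 *)
Hfid[z_] := 68.8 Sqrt[0.7015117571500769 + 0.2984 (1 + z)^3 + 0.00008824284992310034 (1 + z)^4];
chi[z_] := 300000 NIntegrate[1/Hfid[Z], {Z, 0, z}];  (* comoving distance in Mpc *)
N[(4 Pi/3) (chi[2.1]^3 - chi[0.7]^3)]                (* roughly 5.99*10^11 Mpc^3 *)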
 
  • #2
It's very difficult to parse this code as written, so I don't know where the issue lies. However, very large Fisher matrix values should not be occurring. Those would indicate parameters with close to zero variance, i.e. ones that are almost perfectly determined by the data. That's not likely to be the case.

One would, however, expect a Fisher matrix with incredibly tiny eigenvalues on occasion: this will happen if there are combinations of variables that are not constrained by the data at all.

This may indicate that somewhere in the code you've mixed up your Fisher matrix and your Covariance matrix. Or, if you're computing derivatives as shown at the bottom of your post, that you've got a singularity.
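(To put rough numbers on it: the unmarginalized error on a parameter is ##1/\sqrt{F_{ii}}##, so a diagonal element of order ##10^{14}## would be claiming a constraint at the ##10^{-7}## level.)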
 
  • #3
kimbyd said:
It's very difficult to parse this code as written, so I don't know where the issue lies. However, very large Fisher matrix values should not be occurring. Those would indicate parameters with close to zero variance, i.e. ones that are almost perfectly determined by the data. That's not likely to be the case.

One would, however, expect a Fisher matrix with incredibly tiny eigenvalues on occasion: this will happen if there are combinations of variables that are not constrained by the data at all.

This may indicate that somewhere in the code you've mixed up your Fisher matrix and your Covariance matrix. Or, if you're computing derivatives as shown at the bottom of your post, that you've got a singularity.
I checked each derivative myself and none of them have any singularities. My survey volume is of order 10^11, and that is why my answer comes out to 10^14. I suspect I am missing some step that would heavily suppress the huge survey volume. Is there some type of normalization I can apply to my galaxy power spectrum?
 
  • #4
xdrgnh said:
I checked each derivative myself and none of them have any singularities. My survey volume is of order 10^11, and that is why my answer comes out to 10^14. I suspect I am missing some step that would heavily suppress the huge survey volume. Is there some type of normalization I can apply to my galaxy power spectrum?
There may be some type of observational uncertainty that needs to be added to make the result sensible. What sorts of uncertainties are you assuming in the underlying data?
 
  • #5
kimbyd said:
There may be some type of observational uncertainty that needs to be added to make the result sensible. What sorts of uncertainties are you assuming in the underlying data?

In the survey a redshift error of Δz = 0.001(1+z) is assumed. Residual noise is explicitly neglected. For the fiducial parameters the uncertainties are Ω_M ± 0.0096 and h ± 0.0075. The error bars I'm supposed to get for the two parameters are, respectively, ±0.0015 and ±0.0010. Now that I've written down the fiducial error bars, are they supposed to play some role in calculating my Fisher matrix?
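(For context: a redshift error like this usually enters the forecast as a line-of-sight damping ##e^{-k^2\mu^2\sigma_r^2}## of the observed power spectrum, with ##\sigma_r = c\,\Delta z/H(z)## the corresponding radial smearing — the same form as the exponential prefactor in my Pmatter above.)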
 
  • #6
xdrgnh said:
In the survey a redshift error of Δz = 0.001(1+z) is assumed. Residual noise is explicitly neglected. For the fiducial parameters the uncertainties are Ω_M ± 0.0096 and h ± 0.0075. The error bars I'm supposed to get for the two parameters are, respectively, ±0.0015 and ±0.0010. Now that I've written down the fiducial error bars, are they supposed to play some role in calculating my Fisher matrix?
The parameter errors shouldn't be related. Only the experimental ones.

The redshift error is usually the least significant source of error for such surveys. What are the other observables? And are you making sure to use a data set that is stochastic?
 
  • #7
kimbyd said:
The parameter errors shouldn't be related. Only the experimental ones.

The redshift error is usually the least significant source of error for such surveys. What are the other observables? And are you making sure to use a data set that is stochastic?
My observables are the Hubble parameter H(z) and the angular diameter distance Da(z). From those observables that Fisher matrix is propagated to a Fisher matrix for the parameters Omega_M and little h. My problem is that the derivative of the log of the power spectrum is not a small enough number to balance the large Vsurvey. There is one parameter they give in the paper that I haven't utilized: they say that the number of galaxies observed is 50*10^6. I initially thought that number is used to calculate the number density. However, do you think it can be used to offset the huge Vsurvey, which depending on the z values is between 10^9 and 10^11?
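(For scale, that count would correspond to a number density of roughly ##n = 5\times10^{7} / 6\times10^{11}\,\mathrm{Mpc}^{3} \approx 8\times10^{-5}\,\mathrm{Mpc}^{-3}##, which is the quantity that appears in the ##n P_g/(n P_g + 1)## factor of my Veff.)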
 
  • #8
Oh and I propagate the matrix in the following way

F_11=(f_11)*D[H,M]^2+2*D[Da,M]*D[H,M](f_12)+(f_22)(D[Da,M])^2

F_22=(f_11)*D[H,h]^2+2*D[Da,h]*D[H,h]*(f_12)+(D[Da,h]^2)*f_22

F_12=F_21=(f_22)*D[Da,M]*D[Da,h]+(f_12)*D[Da,M]*D[H,h]+(f_21)*D[H,M]*D[Da,h]+(f_11)D[H,h]*D[H,M]

where M is the matter density Omega_M. Here q1 is M and q2 is h; p1 is H and p2 is Da.
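In index notation, what I'm writing out is meant to be the standard projection

$$F_{\alpha\beta}=\sum_{i,j}\frac{\partial p_i}{\partial q_\alpha}\, f_{ij}\,\frac{\partial p_j}{\partial q_\beta}.$$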
Does this look like a faithful representation of the last formula?
 
  • #9
xdrgnh said:
Oh and I propagate the matrix in the following way

F_11=(f_11)*D[H,M]^2+2*D[Da,M]*D[H,M](f_12)+(f_22)(D[Da,M])^2

F_22=(f_11)*D[H,h]^2+2*D[Da,h]*D[H,h]*(f_12)+(D[Da,h]^2)*f_22

F_12=F_21=(f_22)*D[Da,M]*D[Da,h]+(f_12)*D[Da,M]*D[H,h]+(f_21)*D[H,M]*D[Da,h]+(f_11)D[H,h]*D[H,M]

where M is the matter density Omega_M. Here q1 is M and q2 is h; p1 is H and p2 is Da.
Does this look like a faithful representation of the last formula?
Sorry for the delayed response. Haven't been checking my e-mail over the 4th of July weekend.

Unfortunately I'm not really willing to spend the time required to parse and understand an equation this complicated, though you might want to consider using LaTeX to display equations like this (see the LaTeX link near the bottom of every post page for instructions).

All I can suggest are general debugging tips:
1. Simplify the system to one that you can fully understand, and make sure your Fisher matrix code does the expected thing. If it doesn't behave as expected, you can use that to debug. For example, you might reduce the system to only a handful of data points (say, two or three), and see if you can't come up with an alternative method of obtaining the result by hand with so few data points.
2. Make sure the scaling of the system has the expected result. For example, if you halve the number of data samples, your errors should be increased by a factor of approximately ##\sqrt{2}##. Note that for this to work you can't change the overall properties of the dataset: you'll get a very different answer if you selectively remove only the nearby data samples. See if you can come up with other scalings that the answer should respect. If you get a discrepancy, you can use that to debug the problem.
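As a toy illustration of that ##\sqrt{2}## scaling (nothing survey-specific, just the textbook case of estimating the mean of n Gaussian samples with known noise):

(* Fisher information for the mean of n Gaussian samples of noise sigma is n/sigma^2,
   so the forecast error is sigma/Sqrt[n]. *)
sigma = 2.0;
fisherError[n_] := 1/Sqrt[n/sigma^2];
fisherError[100]/fisherError[200]   (* -> Sqrt[2] ~ 1.41 when the sample count is halved *)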
 
  • #10
kimbyd said:
Sorry for the delayed response. Haven't been checking my e-mail over the 4th of July weekend.

Unfortunately I'm not really willing to spend the time required to parse and understand an equation this complicated, though you might want to consider using LaTeX to display equations like this (see the LaTeX link near the bottom of every post page for instructions).

All I can suggest are general debugging tips:
1. Simplify the system to one that you can fully understand, and make sure your Fisher matrix code does the expected thing. If it doesn't behave as expected, you can use that to debug. For example, you might reduce the system to only a handful of data points (say, two or three), and see if you can't come up with an alternative method of obtaining the result by hand with so few data points.
2. Make sure the scaling of the system has the expected result. For example, if you halve the number of data samples, your errors should be increased by a factor of approximately ##\sqrt{2}##. Note that for this to work you can't change the overall properties of the dataset: you'll get a very different answer if you selectively remove only the nearby data samples. See if you can come up with other scalings that the answer should respect. If you get a discrepancy, you can use that to debug the problem.
Thank you so much for taking the time to write out this detailed response. I greatly appreciate all of your help. I'm happy to say that I found out what I was doing wrong: specifically, D[Log[Pobs,H]] should have been written as D[Log[Pobs],H]. There were also a few steps, not explicitly mentioned in the paper, that I wasn't doing; thankfully the authors were able to explain them to me. Mainly, I had to evaluate the power spectrum at values of z in between the bins, and the effective volume had to be evaluated at the bin widths. I was able to reproduce their results and I can move on to the next part of my research. Speaking of that, I'm about to ask another question.
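In case anyone else runs into the same thing, the difference is easy to see on a toy expression (pToy is just a stand-in, not the real Pobs):

pToy = a H^2;
D[Log[pToy], H]    (* -> 2/H, the logarithmic derivative the Fisher integrand needs *)
D[Log[pToy, H]]    (* Log[pToy, H] is a base-pToy logarithm, and single-argument D
                      does no differentiation, so this is not a derivative at all *)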
 
  • #11
xdrgnh said:
Thank you so much for taking the time to write out this detailed response. I greatly appreciate all of your help. I'm happy to say that I found out what I was doing wrong: specifically, D[Log[Pobs,H]] should have been written as D[Log[Pobs],H]. There were also a few steps, not explicitly mentioned in the paper, that I wasn't doing; thankfully the authors were able to explain them to me. Mainly, I had to evaluate the power spectrum at values of z in between the bins, and the effective volume had to be evaluated at the bin widths. I was able to reproduce their results and I can move on to the next part of my research. Speaking of that, I'm about to ask another question.
Great! Glad to hear you figured it out!
 

Related to Fisher Forecasting For EUCLID Survey Help

What is Fisher forecasting for the EUCLID survey and why is it important?

Fisher forecasting for the EUCLID survey is a statistical method used to predict the performance and accuracy of the EUCLID telescope in measuring cosmological parameters. It uses the Fisher matrix, which quantifies the amount of information contained in a dataset, to estimate the uncertainties and correlations in the measurements. This is important because it allows us to optimize the design of the survey and determine the potential scientific impact of the data collected.

How does Fisher forecasting for the EUCLID survey work?

Fisher forecasting for the EUCLID survey works by first constructing a forecast model that includes all the relevant cosmological parameters and their fiducial (expected) values. The Fisher matrix is then built from the derivatives of the model observables with respect to each parameter, weighted by the expected measurement uncertainties. This matrix is then inverted to obtain the covariance matrix, which contains the forecast uncertainties on, and correlations between, the parameters.
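As a minimal illustration of that last step (toy numbers, not EUCLID values):

(* Invert a 2x2 Fisher matrix to get the parameter covariance and the
   marginalized 1-sigma errors. The entries are purely illustrative. *)
fisher = {{4.0*10^4, 1.5*10^4}, {1.5*10^4, 9.0*10^3}};
cov = Inverse[fisher];
Sqrt[Diagonal[cov]]   (* marginalized 1-sigma errors on the two parameters *)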

What are the limitations of Fisher forecasting for the EUCLID survey?

One limitation of Fisher forecasting for the EUCLID survey is that it effectively assumes a linear (Gaussian) dependence of the observables on the cosmological parameters around the fiducial model. This may not hold in more complex, non-linear models. Additionally, the forecast is only as accurate as the model used, so any discrepancies or uncertainties in the model can affect the results.

How can Fisher forecasting be used to improve the EUCLID survey?

Fisher forecasting can be used to optimize the design of the EUCLID survey by determining the optimal survey parameters such as the survey area, depth, and observing strategy. It can also help identify which cosmological parameters are most sensitive to the survey and therefore warrant further study. Additionally, it can be used to compare and evaluate different survey strategies to determine the most efficient and informative approach.

Are there any other applications of Fisher forecasting in astronomy?

Fisher forecasting is a widely used technique in astronomy and has applications in other areas such as galaxy surveys, cosmological parameter estimation, and gravitational wave astronomy. It is also used to predict the performance of future telescopes and missions, helping to guide their design and scientific goals.
