How to Get Final Fisher Matrix from 2 Matrices

In summary, the author suggests that if you have Fisher matrices from two different experiments, you can just add the two matrices and invert the sum to get the Cramer-Rao bound.
  • #1
fab13
TL;DR Summary
I am looking for a way to cross-correlate 2 Fisher matrices and obtain a final Fisher matrix which, once inverted, gives the constraints of this cross-correlation.
I have 2 Fisher matrices which contain information on the same variables (i.e. the rows/columns of the 2 matrices correspond to the same parameters).

Now I would like to combine these 2 matrices by applying, for each parameter, the well-known formula coming from the maximum likelihood estimator method:

$$\dfrac{1}{\sigma_{\hat{\tau}}^{2}}=\dfrac{1}{\sigma_1^2}+\dfrac{1}{\sigma_2^2}\quad(1)$$

##\sigma_{\hat{\tau}}## is the uncertainty of the best estimator obtained by combining `sample1` (uncertainty ##\sigma_1##) and `sample2` (uncertainty ##\sigma_2##).
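As a small numerical illustration of (1) (the sigma values below are made up):

Code:
import numpy as np

# Illustrative 1-sigma uncertainties on the same parameter from two samples
sigma_1 = 0.10
sigma_2 = 0.15

# Equation (1): inverse variances add for the combined estimator
sigma_combined = 1.0 / np.sqrt(1.0 / sigma_1**2 + 1.0 / sigma_2**2)
print(sigma_combined)  # ~0.083, tighter than either input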

Now I would like to do the same thing for my 2 Fisher matrices, i.e. from a matrix point of view.

For this, I tried to diagonalize each of the 2 Fisher matrices. Then I add the 2 diagonal matrices, which gives me a global diagonal Fisher matrix, but I don't know how to come back to the original parameter space (since the diagonalizations do not give the same combination of eigenvectors for each matrix).

If I could also come back to the original parameter space, I could obtain the final Fisher matrix with the matrix product:

$$\text{Fisher}_{\text{final,cross}} = P\,\text{Fisher}_{\text{diag,global}}\,P^{-1}\quad(2)$$

with ##P## the passage (change-of-basis) matrix, composed of eigenvectors, and I could then get the covariance matrix directly by inverting ##\text{Fisher}_{\text{final,cross}}##.
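For a single Fisher matrix, here is what the diagonalization and the return to parameter space look like in numpy (the matrix values are made up); my problem is that each of my 2 matrices has its own eigenvector basis:

Code:
import numpy as np

# A made-up 2x2 Fisher matrix (symmetric positive definite)
F1 = np.array([[4.0, 1.0],
               [1.0, 3.0]])

# Diagonalize: F1 = P1 @ D1 @ P1.T (P1 is orthogonal, so P1^{-1} = P1.T)
eigvals_1, P1 = np.linalg.eigh(F1)
D1 = np.diag(eigvals_1)

# Coming back to parameter space reproduces F1, as in equation (2)
F1_back = P1 @ D1 @ P1.T
print(np.allclose(F1, F1_back))  # True

# Covariance matrix and 1-sigma constraints from this single matrix
cov1 = np.linalg.inv(F1)
print(np.sqrt(np.diag(cov1)))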

How can I come back from the diagonal matrix ##\text{Fisher}_{\text{diag,global}}## in (2) to the original space, i.e. the individual parameters?

My difficulty comes from the fact that the diagonalization of the 2 Fisher matrices produces different passage matrices ##P_1## and ##P_2##, that is, different eigenvectors, and therefore a different linear combination of the variables for each matrix. I have written the passage matrix ##P## above but it is not defined; I think an expression of ##P## as a function of the ##P_1## and ##P_2## passage matrices is the key point of my issue.

There is surely a linear algebra property which could circumvent this issue of accounting for the 2 different linear combinations of variables while still being able to come back to the original space, i.e. the space of the individual parameters represented by the Fisher matrices.


I hope I have been clear; if someone could help me perform this operation, I would be grateful. If you have any questions, don't hesitate, I would be glad to give you more information.

Notation: I use the following conventions:

1. ##D## is the diagonal matrix equal to the sum of the 2 diagonalized matrices ##D_1## and ##D_2## (from the initial matrices `Fisher1` and `Fisher2`): ##D=D_1+D_2##.

2. ##P_1## and ##P_2## are respectively the passage matrices (from the diagonalization of `Fisher1` and `Fisher2`), composed of eigenvectors.

3. ##M_1## is the `Fisher1` matrix and ##M_2## is the `Fisher2` matrix.

So, I am looking for a way to find an endomorphism ##M## that satisfies:

$$D=P^{-1}\,M\,P\quad(3)$$

where ##P## is the unknown passage matrix. So there are 2 unknown quantities in my problem:

1. The "passing" matrix, i.e the eigenvectors (I am yet trying to build it from ##P_1## and ##P_2## matrixes).

2. The ##M## matrix which represents this endomorphism.

However, among these unknown quantities, I do know the eigenvalues of this wanted endomorphism ##M##: they are equal to the diagonal elements of the matrix ##D##.

For the moment, I tried to combine ##P_1## and ##P_2## by taking (approximately, surely):

$$P=P_1+P_2$$

so that I can compute the cross Fisher matrix ##M## like this:

$$M=(P_1+P_2)\,D\,(P_1+P_2)^{-1}$$

But after inverting ##M## (to get the cross covariance matrix), the constraints are not good (the sigmas are larger than expected, for example larger than the sigmas given by either matrix alone).
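To illustrate numerically why this goes wrong (the 2x2 matrices below are made up): each matrix is diagonalized in its own eigenbasis, so rotating the summed diagonal back with either passage matrix does not reproduce the straightforward sum ##F_1+F_2##, and the sum ##P_1+P_2## is not even guaranteed to be invertible.

Code:
import numpy as np

# Two made-up symmetric positive definite "Fisher" matrices
F1 = np.array([[4.0, 1.0],
               [1.0, 3.0]])
F2 = np.array([[5.0, -0.5],
               [-0.5, 2.0]])

# Diagonalize each matrix in its own eigenbasis (P orthogonal, so P^{-1} = P.T)
d1, P1 = np.linalg.eigh(F1)
d2, P2 = np.linalg.eigh(F2)
D = np.diag(d1 + d2)          # D = D1 + D2

# Rotating the summed diagonal back with either passage matrix
M_with_P1 = P1 @ D @ P1.T
M_with_P2 = P2 @ D @ P2.T

# Neither reproduces the straightforward sum of the information matrices
print(np.allclose(M_with_P1, F1 + F2))  # False
print(np.allclose(M_with_P2, F1 + F2))  # False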

Would anyone help me find a way to build this passage matrix ##P## from ##P_1## and ##P_2##? As you can see, a simple sum is not enough.

If an exact construction of ##P## from ##P_1## and ##P_2## is not possible, is there a way to approximate it?

Regards
 
  • #2
Maybe I have a lead that deserves to be explored: the pooled covariance matrix.

Someone suggested that I diagonalize each of the 2 Fisher matrices, take the average of the diagonal elements of both, and build a single diagonal Fisher matrix from them.

Afterwards, I go back to parameter space by doing ##F_{\text{param}}=P\,\text{Fisher}_{\text{new}}\,P^{-1}##,

but I am still stuck on which passage matrix to take: the one from the first diagonalization or the one from the second?
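To make the ambiguity concrete, here is a small numpy sketch of the procedure as I understand it (the matrices are made up): the result depends on which passage matrix is used for the back-transformation.

Code:
import numpy as np

# Two made-up symmetric positive definite Fisher matrices
F1 = np.array([[4.0, 1.0],
               [1.0, 3.0]])
F2 = np.array([[5.0, -0.5],
               [-0.5, 2.0]])

d1, P1 = np.linalg.eigh(F1)
d2, P2 = np.linalg.eigh(F2)

# "Pooled" diagonal: average of the two sets of eigenvalues
D_new = np.diag(0.5 * (d1 + d2))

# Back to parameter space: the answer depends on the chosen passage matrix
F_with_P1 = P1 @ D_new @ P1.T
F_with_P2 = P2 @ D_new @ P2.T
print(np.allclose(F_with_P1, F_with_P2))  # False: this is exactly my ambiguity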

Are there criteria to:

1) choose the weighting of the diagonal elements: why 0.5 for both, and why not a 0.6/0.4 or 0.7/0.3 contribution?

2) choose the correct passage matrix (the one coming from the first diagonalization or from the second)?

Any suggestion is welcome.
 
  • #3
I'm completely confused by your post. Do you have two sample Fisher matrices under the same model? If so, why not just combine the underlying data samples into a single sample and compute the Fisher matrix on that? Or are you talking about the true Fisher matrix under two different models?

This guide says that if you have Fisher matrices from two different experiments, you can just add the two matrices and invert the sum to get the Cramer-Rao bound (http://wittman.physics.ucdavis.edu/Fisher-matrix-guide.pdf).
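In code, for two independent experiments constraining the same parameters, that prescription is simply the following (the matrices are illustrative):

Code:
import numpy as np

# Illustrative Fisher matrices from two independent experiments,
# with identical parameter ordering in rows/columns
F1 = np.array([[4.0, 1.0],
               [1.0, 3.0]])
F2 = np.array([[5.0, -0.5],
               [-0.5, 2.0]])

F_combined = F1 + F2             # information adds for independent data
cov = np.linalg.inv(F_combined)  # Cramer-Rao bound on the combination
sigmas = np.sqrt(np.diag(cov))   # marginalised 1-sigma constraints
print(sigmas)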
 
  • #4
I have already done a simple sum of the 2 matrices: in my forecast context, this gives good constraints. But we can do better with cross-correlations. A first study allowed us to improve the constraints by introducing nuisance parameters, namely the cosmological bias of the spectroscopic probe for the first Fisher matrix and the cosmological bias of the photometric probe for the second Fisher matrix, and forcing the ratio bias_spec/bias_phot to be constant.

1) Now I realize I may have to put priors on bias_phot deduced from bias_spectro (bias_phot seems to be better constrained).

But to introduce these priors, do I first have to diagonalize the 2 Fisher matrices?

Otherwise, how could I introduce these priors ?


2) It was suggested to me to use another method, multiplying bias_phot by the factor (bias_spec_fiducial/bias_phot_fiducial), but I think this is not involved in my Fisher synthesis: what do you think about it?

Nevertheless, thanks for your remark; this is the first answer I have had on this thread.
 
  • #5
Fisher information is the (expected) curvature of the log-likelihood with respect to the model parameters. Maybe I don't know your field well enough to understand your problem, but it looks like the easiest thing to do would be to write down a single joint model which combines the two models you used to generate each of the Fisher matrices, and then compute the Fisher information under that joint model. Otherwise, if you don't know the correlations between the data that generated the two Fisher matrices, you have no principled way of knowing how to combine them.
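To make that concrete, here is a minimal linear-Gaussian sketch (the design matrices and noise covariances are made up): the joint Fisher matrix is ##F = A^{T} C^{-1} A##, and it reduces to ##F_1 + F_2## only when the cross-covariance block between the two data sets is zero.

Code:
import numpy as np

# Made-up linear models d_i = A_i @ theta + noise for the same 2 parameters
A1 = np.array([[1.0, 0.5],
               [0.0, 1.0],
               [1.0, 1.0]])
A2 = np.array([[2.0, 0.0],
               [0.5, 1.0]])
C1 = 0.1 * np.eye(3)       # noise covariance of data set 1
C2 = 0.2 * np.eye(2)       # noise covariance of data set 2
C12 = np.zeros((3, 2))     # cross-covariance between the two data sets

# Joint model: stack the data vectors and build the full noise covariance
A = np.vstack([A1, A2])
C = np.block([[C1, C12],
              [C12.T, C2]])

# Fisher matrix of the joint linear-Gaussian model
F_joint = A.T @ np.linalg.inv(C) @ A

# With zero cross-covariance this equals the sum of the individual matrices
F1 = A1.T @ np.linalg.inv(C1) @ A1
F2 = A2.T @ np.linalg.inv(C2) @ A2
print(np.allclose(F_joint, F1 + F2))  # True when C12 = 0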
 
  • #6
Here is an update on where I am at this point.

The goal is to combine the information contained in 2 different Fisher matrices to get a single cross-correlated Fisher matrix. A friend suggested that I take the average of the diagonals of the diagonalized matrices and sum the 2 new Fisher matrices. This is called the "pooled variance/covariance" method.

I first applied this averaged contribution and obtained better constraints on my parameters (by taking the inverse of the final Fisher matrix). The accuracy is quantified by the "FoM" quantity: the higher it is, the tighter the constraints.

Now I would like to refine the criterion for choosing this value corresponding to the average, i.e. the 0.5 factor applied to the elements of each diagonal matrix (above, the relative contribution factor = 0.5 for each diagonal matrix).

For this, I scanned all the possible contributions by varying an "alpha" quantity between 0 and 1, which represents the relative contribution of the 2 diagonal matrices.

In Python, this corresponds to:

Code:
import numpy as np

# eigen_sp_flat and eigen_xc_flat: 1D arrays of eigenvalues from the earlier
# diagonalizations of the two Fisher matrices (defined elsewhere in my code)
for alpha in np.arange(0, 1, 0.01):
    # weighted sum of the inverted diagonal (eigenvalue) matrices
    FISH_eigen_sp_flat = (alpha * np.linalg.inv(np.diag(eigen_sp_flat))
                          + (1 - alpha) * np.linalg.inv(np.diag(eigen_xc_flat)))
    FISH_eigen_xc_flat = FISH_eigen_sp_flat
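For reference, the FoM computation is not shown in the loop above; a minimal sketch, assuming a DETF-like definition proportional to ##1/\sqrt{\det}## of the marginalised 2x2 covariance block of two chosen parameters (my actual FoM comes from the forecast pipeline and may differ), would be:

Code:
import numpy as np

def fom(fisher, i=0, j=1):
    # Marginalised constraints: invert the full Fisher matrix first,
    # then keep the 2x2 covariance block of the two chosen parameters.
    cov = np.linalg.inv(fisher)
    block = cov[np.ix_([i, j], [i, j])]
    # DETF-like figure of merit: higher means tighter constraints
    return 1.0 / np.sqrt(np.linalg.det(block))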

Below is a figure showing the results:

[Attached figure: results of the alpha scan]


If I take alpha = 0.5 (as suggested by a colleague when he talked about taking the average), I get FoM = 1438. Another result, obtained by a different method (based on a physics hypothesis rather than the mathematical approach used here), gives FoM = 1567: the two are pretty close (and the constraints derived from these 2 FoM values are close, so consistent).

This is better than my previous constraints, but I would like to know why we couldn't choose another value for the alpha parameter and, above all, which criterion I could use to make the right choice for this parameter.

Someone gave me a paper about "pooled matrices", but I have difficulty extracting the methods it cites in order to apply them directly to my case:

A Two-Stage Approach to Synthesizing Covariance Matrices in Meta-Analytic Structural Equation Modeling

I think this paper contains a study of the heterogeneity of the correlation or covariance matrices used in the synthesis: this is still a little unclear in my mind for the moment.

Could anyone help me find a criterion, justified by the combination of my 2 Fisher matrices and the figure above, for choosing the value of the alpha parameter, instead of simply taking the average of the 2 diagonal matrices, i.e. alpha = 0.5?

Do I have to study the heterogeneity of the 2 Fisher matrices (or covariance matrices) and, if so, how do I perform this study?

Any suggestion/remark/help is welcome.
 
  • #7
The main argument I have seen for the "alpha = 0.5" choice is that the first Fisher matrix brings as much information as the second one from the point of view of the Fisher formalism: what do you think about this argument?
 
  • #8
Is there really no one who could bring their expertise on how to quantify the parameter ##\alpha##? (taken for the moment as ##0.5## if I want to take the average).
 
  • #9
fab13 said:
Is there really no one who could bring their expertise on how to quantify the parameter ##\alpha##? (taken for the moment as ##0.5## if I want to take the average).

There are forum members who can answer questions about statistics, provided the statistical model for the data is given and the question asked is a well-defined mathematical question.

However, there may not be any members who can infer what statistical model is being used, or the format of the data, from terminology like "cosmological bias of spectroscopic probe" or "cosmological bias for photometric probe". Also, your question is phrased as a generality, i.e. "how to quantify the parameter ##\alpha##?", which is not a specific mathematical question.

To get an answer to your question, I think you will have to teach some cosmology and learn the different mathematical criteria used to judge whether one estimating procedure is better than another (e.g. maximum likelihood, minimum variance, unbiased, least squares, etc.).

 
  • #10
Stephen Tashi said:
There are forum members who can answer questions about statistics, provided the statistical model for the data is given and the question asked is a well-defined mathematical question.

However, there may not be any members who can infer what statistical model is being used, or the format of the data, from terminology like "cosmological bias of spectroscopic probe" or "cosmological bias for photometric probe". Also, your question is phrased as a generality, i.e. "how to quantify the parameter ##\alpha##?", which is not a specific mathematical question.

Sorry not to have provided the context: I am trying to cross-correlate a spectroscopic probe (access to radial information, i.e. 3D) and a photometric probe (angular coordinates only, i.e. 2D).

Initially, I thought my issue was a pure statistics problem, but now I realize that maybe this post should be transferred to the Cosmology forum. I call on the moderators: what do you think about this change of forum?

Cosmology is often about statistics, but as @Stephen Tashi said, without explaining the underlying physical model it is complicated to do "physical statistics".

Anyway, I hope to find suggestions from people who work in this kind of field; this is the main goal.

@Stephen Tashi, thank you for your remark; I hope to have been clearer.
 
  • #11
Just a question concerning the transfer of this thread: can I do it myself by simply making a copy? But I can't copy the whole thread and I want to avoid duplicates.
 
  • #12
fab13 said:
Just a question concerning the transfer of this thread: can I do it myself by simply making a copy? But I can't copy the whole thread and I want to avoid duplicates.

To get the attention of moderators, you should use the "report" option shown below your post. A report of a post need not be a report about something bad.
 

FAQ: How to Get Final Fisher Matrix from 2 Matrices

1. How do I calculate the final Fisher matrix from two matrices?

The final Fisher matrix can be calculated by taking the sum of the two individual Fisher matrices. This can be represented mathematically as F_final = F_1 + F_2, where F_final is the final Fisher matrix and F_1 and F_2 are the individual matrices.

2. What is the purpose of combining two Fisher matrices?

Combining two Fisher matrices allows for a more accurate and comprehensive estimation of the uncertainties in a parameter space. It takes into account the information from both matrices and provides a more robust representation of the data.

3. Is there a specific method for combining two Fisher matrices?

The most commonly used method is simple addition: when the experiments are independent and parametrized identically (same parameters, same ordering), the individual Fisher matrices are added element-wise. If the experiments are correlated or use different parametrizations, a joint model is needed and simple addition is no longer exact.

4. Can I combine more than two Fisher matrices?

Yes, any number of Fisher matrices can be combined by the same addition, provided the experiments are mutually independent and share the same parameter set and ordering.

5. Are there any limitations to combining Fisher matrices?

Combining Fisher matrices by addition assumes that the likelihood is well approximated by a Gaussian in the parameters and that the two data sets are independent (uncorrelated). If these assumptions are not met, the resulting final Fisher matrix may not accurately represent the data and its uncertainties.
