BillKet
Hello! Can someone help me understand how confidence intervals for the parameters of a fit differ from the parameter errors obtained, for example, from the error matrix? I read Bevington, and throughout the book he says that we can use the errors from the error matrix to define confidence intervals (e.g. a ##68.3\%## confidence interval corresponding to ##1\sigma## on a parameter). Then, in the last chapter, he says that this is not generally correct and that we should instead use confidence intervals that automatically take the correlations between parameters into account. I understand his argument and it makes sense, but now I'm not sure what the error matrix is still useful for, if the estimates taken from it don't account for the correlations among the parameters. I guess they are useful when the correlations are zero, but does that happen often? Thank you!
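
To make concrete what I mean, here is a minimal sketch (a made-up straight-line fit in Python with SciPy; the data and model are just for illustration): the diagonal of the error matrix gives the usual ##1\sigma## parameter errors, while the off-diagonal term shows the correlation those errors ignore.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Straight-line model; the intercept a and slope b come out
# correlated because the x values are not centered on zero.
def model(x, a, b):
    return a + b * x

x = np.linspace(1.0, 10.0, 20)
y = model(x, 2.0, 0.5) + rng.normal(0.0, 0.2, x.size)

popt, pcov = curve_fit(model, x, y,
                       sigma=0.2 * np.ones_like(x),
                       absolute_sigma=True)

# 1-sigma errors from the diagonal of the error (covariance) matrix
errs = np.sqrt(np.diag(pcov))

# Correlation coefficient between a and b from the off-diagonal term
rho = pcov[0, 1] / (errs[0] * errs[1])

print("parameters:      ", popt)
print("1-sigma errors:  ", errs)
print("correlation(a,b):", rho)
```

Quoting only `errs` is exactly the "error matrix" recipe from the early chapters; with a strongly nonzero `rho`, the joint ##68.3\%## region is a tilted ellipse, which is what the confidence-interval discussion in the last chapter is about.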