NoobixCube
Hi,
I need some help getting started with a problem I have.
A spectrograph records the radial velocity of a star to determine whether it hosts a planet. It has an intrinsic systematic error of, say, [tex]4\,\mathrm{m\,s^{-1}}[/tex]. Experimental errors such as 'stellar jitter' are added in quadrature to this systematic error. The data are fitted to a 7-parameter function by nonlinear regression. How will the error on a single fitted parameter change if the intrinsic systematic error of the equipment is reduced to, say, [tex]1\,\mathrm{m\,s^{-1}}[/tex]?
Where do I begin to investigate this?
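To make the question concrete, here is a toy sketch of the two ingredients involved: adding the systematic error and the jitter in quadrature, and seeing how the resulting per-point error propagates into a fitted-parameter uncertainty. It uses a simple straight-line fit instead of the real 7-parameter Keplerian model, and the jitter value of 3 m/s is just an assumed number for illustration:

```python
import math

def total_sigma(sigma_sys, sigma_jitter):
    # systematic error and jitter add in quadrature
    return math.sqrt(sigma_sys**2 + sigma_jitter**2)

def slope_error(ts, sigma):
    # 1-sigma uncertainty on the slope of a straight-line least-squares fit
    # with the same per-point error sigma (standard textbook result):
    # sigma_slope = sigma / sqrt(sum((t - tbar)^2))
    n = len(ts)
    tbar = sum(ts) / n
    sxx = sum((t - tbar) ** 2 for t in ts)
    return sigma / math.sqrt(sxx)

ts = [float(i) for i in range(20)]   # hypothetical observation times (days)
jitter = 3.0                         # assumed stellar jitter, m/s

for sys_err in (4.0, 1.0):
    sig = total_sigma(sys_err, jitter)
    print(f"sys = {sys_err} m/s -> total sigma = {sig:.3f} m/s, "
          f"slope error = {slope_error(ts, sig):.4f}")
```

For this toy model the parameter error scales linearly with the total per-point error, so going from sys = 4 (total 5.0 m/s) to sys = 1 (total √10 ≈ 3.16 m/s) shrinks the slope error by the same factor. For the real 7-parameter nonlinear fit the scaling comes from the covariance matrix of the fit rather than a closed form, but the quadrature step is the same.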