BillKet
Hello! I have data from a spectroscopy experiment of the form ##(\lambda, N)##, where ##\lambda## is the laser wavelength and ##N## is the number of photon counts at that wavelength. The structure of the spectrum is fairly simple: a few peaks that need to be fitted with a Lorentzian distribution. If I account only for the statistical nature of the experiment, the wavelength can be considered fixed (no error associated with it), while the number of counts has a Poisson error, i.e. ##\sqrt{N}##. Doing the fit like this works well and I get reasonable values (using available fitting packages, such as the chi-square minimization routines in Python).

However, I would now like to add the effects of the non-statistical uncertainties. On the x-axis they come from the uncertainty in the laser wavelength, ##\Delta \lambda##, and on the y-axis from the uncertainty in the laser intensity, ##\Delta I##. For the y-axis we can assume that the number of counts is proportional to the intensity, so ##\Delta N_I = \frac{\Delta I}{I} N##, with ##\Delta I## and ##I## known. How should I include these errors in my analysis?

What I am thinking of doing is simply attaching these errors to the values I have: for the x-axis each point becomes ##\lambda \pm \Delta \lambda##, and for the y-axis ##N \pm \sqrt{N + \left(\frac{\Delta I}{I} N\right)^2}##, where the first term under the radical is the statistical (Poisson) variance, which was there before, and the second is the non-statistical contribution, added in quadrature. So in the end I have a data set with errors on both the x and y variables. I can also easily fit this in Python, but is my approach correct? Am I accounting for these extra errors the right way? Thank you!
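For what it's worth, a fit with errors on both x and y can be done with orthogonal distance regression via `scipy.odr`. Below is a minimal sketch of the approach described above on a simulated single-peak spectrum; the peak parameters, the wavelength grid, and the uncertainty values (`dlam`, `rel_dI`) are made up for illustration, not taken from the actual experiment:

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

rng = np.random.default_rng(0)

def lorentzian(beta, x):
    # beta = [amplitude, center, half-width gamma]
    amp, x0, gamma = beta
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

# Simulated spectrum: Poisson-distributed counts around a Lorentzian peak
lam = np.linspace(640.0, 660.0, 81)
counts = rng.poisson(lorentzian([1000.0, 650.0, 1.5], lam)).astype(float)

dlam = 0.05    # assumed laser wavelength uncertainty (hypothetical value)
rel_dI = 0.02  # assumed relative intensity uncertainty Delta I / I (hypothetical)

# Combined y-error: Poisson variance N plus intensity term, in quadrature
sy = np.sqrt(counts + (rel_dI * counts)**2)
sy[sy == 0] = 1.0  # avoid zero weights for empty bins

data = RealData(lam, counts, sx=np.full_like(lam, dlam), sy=sy)
odr = ODR(data, Model(lorentzian), beta0=[800.0, 649.0, 1.0])
out = odr.run()

print(out.beta)     # fitted [amp, x0, gamma]
print(out.sd_beta)  # parameter standard errors
```

One caveat with this weighting scheme: using the measured counts (rather than the model prediction) to build ##\sqrt{N}## can bias the fit slightly at low counts, since downward fluctuations get larger weights.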