- #1
JaWiB
I have an experiment where I calculate a value based on a waveform acquired by a digitizing oscilloscope. Since the signal is very noisy, I need to average many times to get a reasonably accurate value for each measured data point. My question is how I should spend the averaging time (or whether it even matters). On the one hand, I can acquire many waveforms and average them together (the digitizer has a built-in function for this), then run my calculation once on the averaged waveform. On the other hand, I could acquire one waveform at a time, perform my calculation (roughly speaking, just a numerical integration over time), and average many results of that calculation.
It seems to me that both methods should be mathematically equivalent, since it comes down to a sum of integrals versus an integral of a sum, but I can't help but feel like I'm missing something.
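To make the comparison concrete, here is a minimal NumPy sketch of the two orderings; the waveform shape, noise level, and acquisition count are made up purely for illustration and are not from my actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-ins for the real acquisition: time axis, "true" waveform, noise level
t = np.linspace(0.0, 1e-6, 1000)                 # 1 us record, 1000 samples
clean = np.exp(-t / 2e-7) * np.sin(2 * np.pi * 2e7 * t)
n_acq = 500                                      # number of acquired waveforms

# Each row is one noisy acquisition of the same underlying signal
waveforms = clean + 0.5 * rng.standard_normal((n_acq, t.size))

# Method 1: average the waveforms first, then integrate once
avg_then_integrate = np.trapz(waveforms.mean(axis=0), t)

# Method 2: integrate each waveform, then average the per-shot results
integrate_then_avg = np.mean([np.trapz(w, t) for w in waveforms])

print(avg_then_integrate, integrate_then_avg)    # agree to floating-point precision
```

In this sketch the two numbers agree, which matches my intuition: the trapezoidal rule is linear in the samples, so the mean and the integral commute.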