tworitdash
This is a surface-level question, so I won't go into much detail.
Imagine an algorithm that, given a sensor's output, estimates the statistical moments of a variable in nature (for example, the mean and standard deviation of that variable). The sensor measures the variable only once in a while (say, once per time interval ## t_0 ##), and each measurement is over so quickly that the algorithm has very few samples from which to estimate the moments. If we assume that the spectrum of this variable is Gaussian, the estimated mean is quite accurate (though this assumption does not always hold). With a very low number of samples, however, the estimate of the standard deviation is much worse, even when the spectrum really is Gaussian.
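To make this concrete, here is a minimal Monte Carlo sketch (assuming i.i.d. Gaussian samples; the burst size and moment values are made-up numbers) showing how much more the few-sample standard deviation scatters than the few-sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)

true_mean, true_std = 5.0, 2.0   # made-up moments of the variable
n_samples = 8                    # few samples per measurement burst
n_trials = 10_000                # Monte Carlo repetitions

# Each row is one short measurement burst of the Gaussian variable.
bursts = rng.normal(true_mean, true_std, size=(n_trials, n_samples))

mean_est = bursts.mean(axis=1)
std_est = bursts.std(axis=1, ddof=1)  # sample std (unbiased sample variance)

print(f"mean: avg {mean_est.mean():.3f}, relative scatter {mean_est.std() / true_mean:.1%}")
print(f"std : avg {std_est.mean():.3f}, relative scatter {std_est.std() / true_std:.1%}")
# With 8 samples per burst, the sample std scatters roughly twice as much
# (relative to its true value) as the sample mean, and is also biased
# slightly low.
```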
Therefore, even if we have a model of this variable in time (a state-space model) and measurements once in a while (a measurement model), our estimation algorithms will fundamentally produce inaccurate values of the mean and standard deviation, because the measurements themselves can carry huge errors. The size of those errors also depends strongly on the spectrum of the variable, which can be any function (non-Gaussian).
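As an illustration of what such a state-space treatment might look like, here is a minimal scalar Kalman-filter sketch. Everything in it is a stand-in assumption on my part: the true width drifts as a random walk, each measurement is a sample standard deviation from ## N ## Gaussian draws, and its error variance is approximated by the standard Gaussian result ## \mathrm{Var}(s) \approx \sigma^2 / (2(N-1)) ##:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the true standard deviation drifts slowly (random
# walk), and once per interval t_0 we get a noisy few-sample estimate of
# it from an 8-sample burst.
n_steps, n_samples = 200, 8
q = 0.01 ** 2  # assumed process-noise variance (drift rate of sigma)

sigma_true = 2.0 + np.cumsum(rng.normal(0.0, np.sqrt(q), n_steps))

# Few-sample "measurements": sample std of n_samples Gaussian draws per step.
z = np.array([rng.normal(0.0, s, n_samples).std(ddof=1) for s in sigma_true])

# Scalar Kalman filter. The key point is that the measurement-noise
# variance R is large because each measurement comes from so few samples:
# for Gaussian data, Var(s) ~ sigma^2 / (2 (N - 1)).
x, P = z[0], 1.0
est = []
for zt in z:
    P += q                               # predict (random-walk state model)
    R = x**2 / (2 * (n_samples - 1))     # few-sample measurement variance
    K = P / (P + R)                      # Kalman gain
    x += K * (zt - x)                    # update with the noisy width estimate
    P *= 1 - K
    est.append(x)

print(f"raw measurement scatter: {np.std(z - sigma_true):.3f}")
print(f"filtered scatter       : {np.std(np.array(est) - sigma_true):.3f}")
```

The filter does shrink the scatter relative to the raw few-sample estimates, but only because the Gaussian assumption gives a usable measurement-noise model ## R ##; with an unknown, non-Gaussian spectrum that ## R ## would itself be wrong, which is exactly the difficulty described above.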
I do not understand quantum mechanics very well, but when I saw estimation techniques built on weak measurements, I got the idea that a weak measurement is also one that doesn't carry much information about the variable, similar to the problem I mentioned: the sensor doesn't interact with the system for an adequate amount of time to resolve the moments, and therefore yields weak values.
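For reference, the standard weak-value expression of Aharonov, Albert, and Vaidman, for an observable ## \hat{A} ## with the system pre-selected in ## |\psi\rangle ## and post-selected in ## |\phi\rangle ##, is

$$ A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}. $$

Whether the few-sample moment estimates above can be mapped onto anything like this quantity is essentially what I am asking below.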
Can the same estimation techniques that are used for weak measurements of quantum systems be applied to the problem I described here? Since the low number of samples results in lower power and higher uncertainty (inaccurate values of the standard deviation) in the measurement, can we treat it as a weak measurement? If so, can I relate the estimation techniques mathematically to my problem?
I know this is a surface-level question, but I am sure the idea is clear to everyone. Please share your views on this.