Another thread in this section (https://www.physicsforums.com/showthread.php?t=619945) raises some interesting questions. I can't figure out what the original poster there is asking, so I think it best to put what I'm asking in a new thread.
A simple "field test" of a system that detects a target would be to position the target (such as an aircraft) at a great distance from the detector (such as a radar) and then move the target closer in a straight line. Repeating such a test and recording the distance at which detection first takes place gives a distribution of ranges. A typical analysis of the results is to fit a curve F(x) to the cumulative histogram of the ranges. I've often seen the curve F(x) misnamed "The Probability Of First Detection Vs Range". It actually is "The Cumulative Distribution Of The Range At Which First Detection Occurs".
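The analysis step above (building the cumulative histogram that F(x) is fitted to) can be sketched as follows. The recorded ranges are made-up numbers and `empirical_cdf` is a hypothetical helper of my own, not a function from any particular library:

```python
def empirical_cdf(ranges):
    """Empirical cumulative distribution of first-detection ranges:
    for each recorded range x, the fraction of trials detected at
    range <= x. A smooth curve F(x) would be fitted to these points."""
    xs = sorted(ranges)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

# Hypothetical detection ranges from five repeated field tests.
ranges = [42.0, 55.5, 48.3, 60.1, 51.7]
for x, fx in empirical_cdf(ranges):
    print(x, fx)
```

The step points of this staircase are what the fitted curve F(x) summarizes.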
Suppose we wish to write a stochastic simulation of the above test that depicts events as the test progresses. We divide the total time of the test into equal intervals, and these define the "steps" of our simulation. We wish to simulate the first detection by making an independent random draw from a Bernoulli random variable at each step until a detection occurs (i.e. this variable represents "first detection occurs" or "first detection does not occur"). In general, the Bernoulli random variables will have different distributions at different steps. We want the distribution of first-detection ranges produced by the simulation to match the curve F(x). How do we solve for the distributions of the appropriate Bernoulli random variables?
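The simulation loop being described might look like this, with the per-step Bernoulli probabilities `p_of_step` left as exactly the unknown the question asks about; the function names and the target trajectory are placeholders of mine, not part of any standard scheme:

```python
import random

def simulate_first_detection(p_of_step, x_of_step, n_steps, seed=None):
    """Run one trial of the test: at each step, make an independent
    Bernoulli draw with probability p_of_step(n). Return the range
    x_of_step(n) at which first detection occurs, or None if the
    target is never detected during the test."""
    rng = random.Random(seed)
    for n in range(n_steps):
        if rng.random() < p_of_step(n):   # Bernoulli draw at step n
            return x_of_step(n)           # range at first detection
    return None
```

Repeating `simulate_first_detection` many times yields an empirical distribution of first-detection ranges; the question is how to choose `p_of_step` so that this distribution matches F(x).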
In the code of many simulations and in many technical reports, I have seen the following done:
At step N, find the distance x between the detector and the target and let the probability of a first detection at that step be F(x). (I have also seen people use F'(x).) This is a nonsensical model: if you divide a given interval into even finer intervals and apply the same rule, you increase the probability of detection within that interval, since you make more independent random draws.
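The objection can be checked with a two-line calculation. Suppose F(x) is roughly constant at 0.2 over some short stretch of the approach (the value 0.2 is just an illustration): one draw over that stretch gives detection probability 0.2, but splitting the same stretch into ten steps, each still assigned probability 0.2, gives 1 − 0.8¹⁰ ≈ 0.89:

```python
def detection_prob(p_per_step, n_steps):
    """Probability of at least one detection in a fixed interval when
    each of n_steps independent draws uses probability p_per_step."""
    return 1.0 - (1.0 - p_per_step) ** n_steps

coarse = detection_prob(0.2, 1)    # one step over the interval: 0.2
fine = detection_prob(0.2, 10)     # ten finer steps, same F(x): ~0.893

assert fine > coarse               # finer steps inflate the probability
```

So under this model the simulated detection statistics depend on the arbitrary choice of step size, which is exactly why it cannot be right.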
So what ought to be done?