TL;DR Summary: A brief clarification on the definition of a random variable and probability in quantum physics.
In a line of reasoning involving measurement outcomes in quantum mechanics, such as spins, photons hitting a detection screen with discrete positions (as in a CCD), or atomic decays counted at discrete time intervals (as in a Geiger detector), I would like to define rigorously the notions of 'random variable' and 'probability of outcomes' in quantum physics. I defined them as follows.
Let us consider a discrete random variable ##X##, that is, a measurable function ##X: \Omega \rightarrow K##, with ##\Omega## a finite (countable) sample space of possible outcomes and ##K## a measurable space, and let ##P(X)## be its discrete probability distribution (PD) (only discrete PDs are assumed here), defined as the set of probabilities that ##X## takes on the non-zero-probability values ##x_{i}## ##(i=1,\dots,C)##: $$P(X=x_{i})=p_{i}=\frac{n_{i}}{N},$$ with ##n_{i}## the number of events yielding the i-th possible outcome, ##N## the overall number of events or measurements, and ##C## the number of all possible outcomes, such that, in the limit of large numbers (##N \rightarrow \infty##), the PD is normalized: ##\sum_{i} p_{i}=1##. (A small numerical sketch illustrating this definition is given at the end of the post.)
However, I'm told this is not clear mathematical language. Is there something wrong with, or missing from, this statement?
Moreover, I'm told that one can have a probability space and well-defined random variables without appealing to a frequentist interpretation.
But while it is true that in a very general context one need not appeal to a frequentist interpretation, once we specify that we are dealing with events in the context of quantum mechanics, don't we always imply a frequentist interpretation of the measurements?
Am I missing something, and can the definition be made more rigorous?
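For concreteness, here is a minimal numerical sketch of what I mean by the relative-frequency definition above. The choice of Python/NumPy, the spin-1/2 example, and the specific Born-rule probabilities (0.36, 0.64) are just my illustrative assumptions, not part of the question: the script simulates ##N## independent measurements and checks that the relative frequencies ##n_{i}/N## approach the ##p_{i}## as ##N## grows.

```python
import numpy as np

# Minimal sketch (illustrative assumptions only): simulate N projective
# spin-1/2 measurements with assumed Born-rule probabilities p_i and compare
# the relative frequencies n_i / N with p_i as N grows.

rng = np.random.default_rng(0)

p = np.array([0.36, 0.64])       # assumed Born-rule probabilities p_i
outcomes = np.array([+1, -1])    # measurement outcomes x_i (spin up / spin down)

for N in (10**2, 10**4, 10**6):
    # N independent measurements; each yields outcome x_i with probability p_i
    samples = rng.choice(outcomes, size=N, p=p)
    # n_i = number of events giving the i-th outcome
    n = np.array([(samples == x).sum() for x in outcomes])
    rel_freq = n / N             # empirical P(X = x_i) = n_i / N
    print(f"N = {N:>7}: n_i/N = {rel_freq}, p_i = {p}")
```

Running this, the empirical ##n_{i}/N## drift toward the assumed ##p_{i}## as ##N## increases, which is exactly the large-##N## limit invoked in the definition above.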