Reliability of Dynamic Probability Models: Understanding and Calculating

In summary, the conversation discusses how to define a reliability parameter for a model of the probability of an event. This involves measuring how well the model predicts a given set of data, which in turn depends on the decisions one would make on the basis of that measure. The idea is a constantly changing model that adapts and updates its probabilities based on real-world data. One method for this is Bayesian probability, which uses a prior distribution to model the probability of a probability. The conversation also touches on the significance of errors in predictions and the need to define a measure of comparison between the model and the data.
  • #1
disregardthat
I'm interested in the question of defining a reliability parameter of a model of the probability of an event.

Say you're tossing a fair coin, with a 50% chance of heads. Your model tells you there's a 40% chance of the coin showing heads. How reliable is your model?

Say you're tossing coins A1, A2, ..., each with probability P1, P2, ... of showing heads. Your model says the probabilities are T1, T2, ... How reliable is your model?

Is there a concept in statistics concerning the reliability of events?

I'm trying to figure out a method for determining the reliability of a model whose probabilities are constantly changing, but first I'd like to know the basic concepts so I can develop a method for calculating its reliability.
 
  • #2
I think that depends on your requirements.

Are you interested in the absolute difference between the predicted and the actual probability? => The model is "10% wrong" (|0.40 - 0.50| = 0.10).
Are you interested in the difference relative to the actual probability? => The model is "20% wrong" (0.10 / 0.50 = 0.20).
...
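For instance, a minimal sketch of those two measures in Python, using the 40%-vs-50% coin from the first post:

```python
# Absolute vs. relative error between a predicted and an actual probability.
true_p = 0.50   # the fair coin's actual probability of heads
model_p = 0.40  # the model's predicted probability of heads

absolute_error = abs(model_p - true_p)    # 0.10 -> "10% wrong"
relative_error = absolute_error / true_p  # 0.20 -> "20% wrong"

print(f"absolute error: {absolute_error:.2f}")
print(f"relative error: {relative_error:.2f}")
```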
 
  • #3
disregardthat said:
Is there a concept in statistics concerning the reliability of events?
There are many such concepts: for example, statistical significance, statistical hypothesis testing, and statistical inference. These break down further depending on whether one subscribes to a Bayesian or a frequentist approach to statistics.
 
  • #4
disregardthat said:
I'm interested in the question of defining a reliability parameter of a model of the probability of an event.

Say you're tossing a fair coin, with a 50% chance of heads. Your model tells you it's 40% of the coin showing heads. How reliable is your model?

Your example suggests that your concept of "reliability" involves measuring how well a model predicts a given set of data. There is no universally correct way to measure how well a model predicts data. You can define a measure of error in various ways. In your example, is the error of predicting "H" when the data is "T" the same "size" of error as predicting "T" when the data is "H"? There is no law of mathematics or logic that tells us whether these errors are equally serious. (For example, in medicine we can think of "H" as having a disease and "T" as not having it. A test predicting "T" when the actual result is "H" makes an error that has different consequences than a test predicting "H" when the actual result is "T".)
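One way to make that asymmetry concrete is to attach an explicit cost to each kind of error. A minimal sketch (the cost values here are made up purely for illustration):

```python
# An error measure in which predicting "T" (no disease) when the truth is
# "H" (disease) costs more than the reverse. The costs are arbitrary,
# chosen only to illustrate the asymmetry.
COST = {
    ("H", "H"): 0.0,  # correct prediction
    ("T", "T"): 0.0,  # correct prediction
    ("T", "H"): 5.0,  # missed disease: serious error
    ("H", "T"): 1.0,  # false alarm: less serious error
}

def total_cost(predictions, outcomes):
    """Sum the cost of each (prediction, outcome) pair."""
    return sum(COST[(p, o)] for p, o in zip(predictions, outcomes))

print(total_cost(["H", "T", "T"], ["H", "T", "H"]))  # 0 + 0 + 5 = 5.0
```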

The best way to investigate an appropriate definition of "reliability" is to ask what decisions you would make on the basis of the measure.

Since a stochastic model doesn't make definite predictions, even if you define a measure of error between data and a definite set of predictions, you still must define how you will compare the model to the data. Your example suggests you might be thinking of running the model once and getting a definite set of predictions. Is that your idea?
 
  • #5
Stephen Tashi said:
Your example suggests that your concept of "reliability" involves measuring how well a model predicts a given set of data. There is no universally correct way to measure how well a model predicts data. You can define a measure of error in various ways. In your example, is the error of predicting "H" when the data is "T" the same "size" of error as predicting "T" when the data is "H"? There is no law of mathematics or logic that tells us whether these errors are equally serious. (For example, in medicine we can think of "H" as having a disease and "T" as not having it. A test predicting "T" when the actual result is "H" makes an error that has different consequences than a test predicting "H" when the actual result is "T".)

The best way to investigate an appropriate definition of "reliability" is to ask what decisions you would make on the basis of the measure.

Since a stochastic model doesn't make definite predictions, even if you define a measure of error between data and a definite set of predictions, you still must define how you will compare the model to the data. Your example suggests you might be thinking of running the model once and getting a definite set of predictions. Is that your idea?

Yep, my idea is definitely that the model is constantly compared to what it attempts to model. Furthermore, the model constantly adapts to this, changing its probabilities based on what occurs in order to sharpen future predictions.

What I had in mind is something of this sort: instead of the fixed probabilities T1, T2, ... in my example, each TX would be a random variable with a probability distribution of the probability. Maybe it's a fair assumption that they are normally distributed. In that case I'd like to have a mean MX and a standard deviation SX for each TX.

As I had in mind that the TX's constantly change to adapt to the situation, I'd want the parameters MX and SX to change as well, with SX being a measure of how much or how little the probability changes, i.e. the "reliability" of MX.

Thus SX could be the reliability I am searching for. However, I don't know whether a normal distribution is a reasonable assumption, so more generally I am asking for a method that might find an optimal distribution, with a corresponding measure of reliability. I think the TX's tend to decrease faster than they increase over time, with a wave-like shape globally, so a skewed distribution might be a better fit. But from what I gather from your post, there might not be a 'customized' distribution depending on how the variables change.

Basically, what I would want out of this in practice is a confidence interval for the probability (of, say, 95%) instead of a fixed value, but that can of course be drawn from the distributions TX.
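To make this concrete, here is a minimal sketch of the kind of updating I have in mind, assuming an exponentially weighted estimate; the decay rate ALPHA, the starting values, and the toy outcome sequence are all arbitrary:

```python
import math

# Track a drifting probability with an exponentially weighted mean M_X and
# a spread-based reliability measure S_X. ALPHA is an arbitrary assumption:
# larger values adapt faster but fluctuate more.
ALPHA = 0.05

def update(m, v, outcome):
    """Update the estimate m and the outcome variance v from one 0/1 outcome."""
    m = (1 - ALPHA) * m + ALPHA * outcome
    v = (1 - ALPHA) * v + ALPHA * (outcome - m) ** 2
    return m, v

m, v = 0.5, 0.25  # start from maximal uncertainty
for outcome in [1, 0, 1, 1, 0, 1, 1, 1]:  # toy data: 1 = heads, 0 = tails
    m, v = update(m, v, outcome)

# Steady-state variance of an exponentially weighted mean of i.i.d.
# outcomes is roughly v * ALPHA / (2 - ALPHA).
s = math.sqrt(v * ALPHA / (2 - ALPHA))  # a candidate S_X
print(f"M_X = {m:.3f}, S_X = {s:.3f}")
print(f"rough 95% interval: ({m - 1.96 * s:.3f}, {m + 1.96 * s:.3f})")
```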
 
  • #7
The way that a "probability of a probability" is usually modeled is with a Bayesian prior distribution. A random variable is assumed to come from a family of distributions (such as Gaussians), and some "prior" probability distribution is assigned to the parameters of that family (such as a probability distribution on the mean and variance of the Gaussians). The distributions on the parameters are updated from data using Bayes' theorem. There are many different ways to implement this general approach, so it still doesn't direct you to a particular procedure. You should first study the general method and then decide what fits your particular problem. A somewhat advanced book on this subject is Jaynes's "Probability Theory: The Logic of Science". There are pages related to this book on the web. I don't know whether the whole book is still available online. It used to be.
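As a concrete illustration of the general method, here is a sketch of the standard Beta-Binomial conjugate update; the uniform prior and the counts are made up for the example, and it assumes SciPy is available:

```python
from scipy import stats

# Bayesian "probability of a probability": a Beta prior on the chance of
# heads, updated by coin-toss data. With a conjugate prior, Bayes' theorem
# reduces to simple counting.
prior_a, prior_b = 1.0, 1.0  # Beta(1, 1) = uniform prior (an assumption)
heads, tails = 12, 18        # hypothetical data: 12 heads in 30 tosses

posterior = stats.beta(prior_a + heads, prior_b + tails)

print(f"posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```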
 
  • #8
Thanks for the help, I'll look into these references.
 

FAQ: Reliability of Dynamic Probability Models: Understanding and Calculating

What is the meaning of reliability in terms of probability?

Reliability in terms of probability refers to the consistency and accuracy of the results obtained from a probability experiment or calculation. It indicates the likelihood of obtaining the same or similar results when the experiment is repeated.

How is reliability measured in probability?

Reliability in probability is commonly measured by a confidence interval at a stated confidence level. This is a range of values within which the true probability is likely to lie. At a fixed confidence level, a narrower interval indicates a more reliable estimate.
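For example, a sketch of a 95% interval for an observed proportion, using the standard Wilson score formula (the counts are hypothetical):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 corresponds to a 95% confidence level)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical data: 40 heads in 100 tosses.
lo, hi = wilson_interval(40, 100)
print(f"95% confidence interval: ({lo:.3f}, {hi:.3f})")
```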

Can a probability be 100% reliable?

No, a probability estimate can never be 100% reliable. All probabilities are based on assumptions, and there is always some possibility of error or uncertainty. A confidence interval quantifies that uncertainty rather than eliminating it.

How does sample size affect the reliability of a probability?

Generally, a larger sample size leads to a more reliable probability estimate. This is because a larger sample reduces the impact of random fluctuations and gives a more accurate representation of the entire population; the width of a confidence interval typically shrinks in proportion to 1/√n (see the sketch below). However, other factors, such as the quality of the sample and the sampling method, also play a role in determining reliability.
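A quick numerical sketch of this effect, using the normal approximation; the true probability p = 0.5 is hypothetical, chosen as the worst case:

```python
import math

# The half-width of a normal-approximation 95% interval for a proportion
# shrinks like 1/sqrt(n) as the sample size n grows.
p = 0.5
for n in [10, 100, 1000, 10000]:
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>5}: 95% interval half-width ~ {half_width:.3f}")
```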

What are some factors that can affect the reliability of a probability?

Some factors that can affect the reliability of a probability include the size and quality of the sample, the sampling method used, the assumptions made in the calculation, and the level of confidence chosen. Other external factors such as human error, measurement error, and external influences can also impact the reliability of a probability.
