Conditional probability on a finite set

In summary, the thread asks whether conditional probabilities such as P(A+B+|T) can be meaningfully assigned when T is a fixed, finite record of coin-toss outcomes, and whether the resulting ratios are probabilities in the usual sense or merely observed frequencies.
  • #1
billschnieder
T ≡ "two coins tossed 7 times by two people A and B giving outcomes [itex][A^+B^+, A^-B^+, A^+B^-, A^-B^+, A^+B^+, A^-B^-, A^-B^+][/itex], where + = heads and - = tails"
Calculate [itex]P(A^+B^+|T)[/itex], [itex]P(A^+|T)[/itex], [itex]P(B^+|T)[/itex] and [itex]P(B^+|T,A^+)[/itex]

I asked this question elsewhere and there was a suggestion that the question does not make sense.
 
  • #2
2/7, 3/7, 5/7, 2/3 (same order as question).
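Those values are the relative frequencies over the seven listed outcome pairs, each weighted 1/7. A minimal Python sketch of that counting (the list literal simply transcribes T from the opening post):

[code]
# Treat the 7 listed outcome pairs as the whole population, each with weight 1/7,
# and read each P(. | T) as a relative frequency over that set.
outcomes = [("+", "+"), ("-", "+"), ("+", "-"), ("-", "+"),
            ("+", "+"), ("-", "-"), ("-", "+")]   # (A, B) for each of the 7 tosses

n = len(outcomes)                                               # 7
p_ApBp = sum(a == "+" and b == "+" for a, b in outcomes) / n    # P(A+B+|T) = 2/7
p_Ap   = sum(a == "+" for a, b in outcomes) / n                 # P(A+|T)   = 3/7
p_Bp   = sum(b == "+" for a, b in outcomes) / n                 # P(B+|T)   = 5/7
a_plus = [(a, b) for a, b in outcomes if a == "+"]
p_Bp_Ap = sum(b == "+" for a, b in a_plus) / len(a_plus)        # P(B+|T,A+) = 2/3

print(p_ApBp, p_Ap, p_Bp, p_Bp_Ap)
[/code]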
 
  • #3
mathman said:
2/7, 3/7, 5/7, 2/3 (same order as question).

Thanks for the first feedback; however, almost certainly the main issue is the title! The topic under discussion is about measurements to estimate the expectation values for an infinite sequence by taking many samples of a very large size. In view of that, the reaction to that example was:
Here you refer to the statistical analysis of a small sample. However, P usually stands for probability. As in your coin toss example, the probability of heads for a fair coin is P = 0.5 even if you happen to throw, for example, heads twice.
To elaborate, you could have the following statistical sequence:
++
++
While the statistical result of a few throws was ++ for all throws, this certainly does not mean that the probability of throwing ++ is 1.
- http://en.wikipedia.org/wiki/Law_of_large_numbers
- http://en.wikipedia.org/wiki/Student's_t-distribution
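A small Python sketch of the law-of-large-numbers point in that quote, assuming two fair and independent coins (so the "true" probability of ++ is 0.25) and an arbitrary run length of 10000 tosses:

[code]
# A short run can easily give ++ every time, but that does not make P(++) = 1.
# For two fair, independent coins P(++) = 0.25, and only the relative frequency
# over a long run settles near that value.
import random

random.seed(0)
n_tosses = 10_000             # arbitrary run length for the illustration
both_heads = 0
for _ in range(n_tosses):
    a = random.choice("+-")   # coin A
    b = random.choice("+-")   # coin B
    if a == "+" and b == "+":
        both_heads += 1

print(both_heads / n_tosses)  # close to 0.25 for a long run
[/code]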
And one of the replies was:
Do you still believe that P(A^+B^+|T) is factorable into P(A^+|T) and P(B^+|T) [..]
What has a small data set got to do with it? [The small finite set] IS the population. Use your law of large numbers to randomly pick from that.

In view of the context of the discussion, the obvious follow-up questions are:

2. Is that the correct way to predict the outcome of a large sequence of coin tosses?
3. What can be said about factorizing the probabilities of the coins?

This kind of issue regularly pops up in discussions. Thus, thanks in advance for any clarifying comments by experts!

PS: The problem may stem in part from the different meanings that people attach to the word "probability"; there are even disagreeing schools of thought. For example, I define "probability" as in the introduction here:
http://en.wikipedia.org/wiki/Probability
Using that definition, your calculation is not a probability but a statistical result.
 
  • #4
harrylin said:
Thanks for the first feedback; however, almost certainly the main issue is the title!
Because it disagrees with what you initially thought the question was about? Then you misunderstood the question.

The topic under discussion is about measurements to estimate the expectation values for an infinite sequence by taking many samples of a very large size.
Then you misunderstood the question. I never mentioned anything about an infinite sequence or taking samples. I gave you the set of outcomes which was finite.

2. Is that the correct way to predict the outcome of a large sequence of coin tosses?
This question is relevant only when a large sequence of coin tosses is being discussed. This is not and was not such a case. You incorrectly assumed it was and doubled down on that assumption despite my attempts to clarify the question for you.
PS: The problem may stem in part from the different meanings that people attach to the word "probability"; there are even disagreeing schools of thought. For example, I define "probability" as in the introduction here:
http://en.wikipedia.org/wiki/Probability
Using that definition, your calculation is not a probability but a statistical result.
Indeed I perceived that you thought that. It appears you still think the question as stated above does not make sense, or what do you mean by "is not a probability"? Can you or can you not answer it as posed?

But I encourage you to check the first 4 chapters of "Probability Theory: The Logic of Science" by E.T. Jaynes.
 
  • #5
billschnieder said:
[..] This question is relevant only when a large sequence of coin tosses is being discussed. This is not and was not such a case. You incorrectly assumed it was and doubled down on that assumption despite my attempts to clarify the question for you.
Can you clarify (for me as well as for onlookers) what your question has to do with the context of measurements of as much data as is required for statistically valid estimations of expectation values?
[..] It appears you still think the question as stated above does not make sense, or what do you mean by "is not a probability"? [..] But I encourage you to check the first 4 chapters of "Probability Theory: The Logic of Science" by E.T. Jaynes.
I happen to have read those. Funny enough, I suspect that Jaynes would say that what you ask for are not probabilities but statements about a known statistical result. There is no degree of plausibility for those other than 1 or 0, according to whether the event happened or not. Of course I could have overlooked or misunderstood something; if so, please present it. :smile:
 
  • #6
Since P(... |T) (condition on T) was asked, the only thing that matters is the set of outcomes labelled T.
 
  • #7
mathman said:
Since P(... |T) (condition on T) was asked, the only thing that matters is the set of outcomes labelled T.
It depends on what you think may be meant by "P". Please explain what "P" means in your answer. Does it correspond to the likelihood that an event happened or will happen, or to the likelihood of unobserved things?

Note, referring to Jaynes (as both Bill and I appreciate his book): a probability is not the same thing as a frequency.
 
  • #8
harrylin said:
It depends on what you think may be meant by "P". Please explain what "P" means in your answer. Does it correspond to the likelihood that an event happened or will happen, or to the likelihood of unobserved things?

Note, referring to Jaynes (as both Bill and I appreciate his book): a probability is not the same thing as a frequency.
P to me simply means probability, à la the Kolmogorov axiomatic approach. For the example, the sample space has exactly 7 points and the random variables A and B assume values (+ or -) as given on those points.
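On that reading, a Python sketch of the same model: the seven listed tosses are the sample points, each carrying probability 1/7, and one can also check directly whether the joint probability factors (the uniform weighting is the assumption being made explicit here):

[code]
# Kolmogorov-style reading: 7 equally likely sample points, with random
# variables A and B given by the listed outcomes.
points = [("+", "+"), ("-", "+"), ("+", "-"), ("-", "+"),
          ("+", "+"), ("-", "-"), ("-", "+")]
w = 1 / len(points)   # uniform measure: each point has probability 1/7

p_joint = sum(w for a, b in points if a == "+" and b == "+")  # P(A+, B+) = 2/7
p_A     = sum(w for a, b in points if a == "+")               # P(A+)     = 3/7
p_B     = sum(w for a, b in points if b == "+")               # P(B+)     = 5/7

# 2/7 differs from (3/7)*(5/7) = 15/49, so under this measure A and B
# are not independent and the joint probability does not factor.
print(p_joint, p_A * p_B)
[/code]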
 
  • #9
billschnieder said:
T ≡ "two coins tossed 7 times by two people A and B giving outcomes [itex][A^+B^+, A^-B^+, A^+B^-, A^-B^+, A^+B^+, A^-B^-, A^-B^+][/itex], where + = heads and - = tails"
Calculate [itex]P(A^+B^+|T)[/itex], [itex]P(A^+|T)[/itex], [itex]P(B^+|T)[/itex] and [itex]P(B^+|T,A^+)[/itex]

I asked this question elsewhere and there was a suggestion that the question does not make sense.

It doesn't make sense because the notation is unclear.

For example, in the expression [itex] P(A^+B^+) [/itex] what is the meaning of the event [itex] A^+B^+ [/itex]? Does this mean the event that A and B both toss heads on the first of 7 tosses? Or does it mean that they both toss heads on at least one of 7 tosses? Or does it mean something else?

Furthermore, it doesn't make sense to speak of a conditional probability unless you have first established the probability space upon which you wish to place the condition. Since you didn't do that, people have to guess what the space is.
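As an illustration of why the choice of space matters, a Python sketch of two of the readings mentioned above, both applied to the fixed record of seven tosses (the record is just transcribed from the opening post):

[code]
# Two readings of "A+B+" over the same fixed record of 7 tosses.
record = [("+", "+"), ("-", "+"), ("+", "-"), ("-", "+"),
          ("+", "+"), ("-", "-"), ("-", "+")]

# Reading 1: "A and B both show heads on a toss selected uniformly at random
# from the 7 recorded tosses"  ->  2/7
p_random_toss = sum(a == "+" and b == "+" for a, b in record) / len(record)

# Reading 2: "A and B both showed heads on at least one of the 7 recorded
# tosses"  ->  for a fixed record this either happened or it didn't (here: 1)
p_at_least_once = float(any(a == "+" and b == "+" for a, b in record))

print(p_random_toss, p_at_least_once)  # 0.2857..., 1.0
[/code]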
 
  • #10
mathman said:
P to me simply means probability, à la the Kolmogorov axiomatic approach. For the example, the sample space has exactly 7 points and the random variables A and B assume values (+ or -) as given on those points.
Well, the problem arose because I also think that P is meant to indicate probabilities or likelihoods. In contrast, the provided data are supposedly factual outcomes (frequencies), which should be distinguished from probabilities or likelihoods. Roughly speaking: statistical data are known facts while probabilities are bets. Assuming that we understood the information, the likelihood that A+B+ occurred is 1 - it's a sure bet. The frequency with which A+B+ occurred is 2/7, but that's not the same thing.

Based on that sample it may be possible to estimate, for example, an expectation value with a certain confidence, but I don't think that such was the question.
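A rough Python sketch of what such an estimate could look like, under the extra assumption (not given in the problem) that coin A's seven tosses are independent draws from one coin with a fixed unknown bias, using a crude normal-approximation interval:

[code]
# IF A's 7 tosses are modelled as i.i.d. draws from a coin with unknown bias
# (an assumption, and precisely the step under dispute in this thread), then
# 3/7 estimates P(A = +), with a rough 95% normal-approximation interval.
from math import sqrt

n, k = 7, 3                    # 7 tosses, 3 heads for A in the given record
p_hat = k / n
half_width = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
print(p_hat, (p_hat - half_width, p_hat + half_width))
# ~0.43 with an interval of roughly (0.06, 0.80): 7 tosses say very little
[/code]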

Bill, can you elaborate on what you intended to show about probability calculations? Apparently you wanted to illustrate something about factorisation.
 
  • #11
harrylin said:
Assuming that we understood the information, the likelihood that A+B+ occurred is 1 - it's a sure bet.

It might be a sure bet if A+B+ denoted an event. As I replied to billschnieder, the notation A+B+ doesn't describe a specific event. It might mean the event "On a randomly selected toss from the 7 tosses, A and B both throw heads". The space of events we are considering hasn't been defined. It's billschnieder that needs to be cross-examined, not mathman.
 

Related to Conditional probability on a finite set

1. What is conditional probability on a finite set?

Conditional probability on a finite set refers to the likelihood of an event occurring given that another event has already occurred. It is calculated by dividing the probability of the intersection of the two events by the probability of the conditioning (given) event.

2. How is conditional probability different from regular probability?

Regular probability calculates the likelihood of an event occurring without any prior information. Conditional probability takes into account additional information or events that have already occurred.

3. What is the formula for calculating conditional probability on a finite set?

The formula for conditional probability on a finite set is P(A|B) = P(A ∩ B) / P(B), where P(A|B) represents the probability of event A occurring given that event B has already occurred.
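As a worked instance using the numbers from the thread above (a sketch only): P(B+|A+) = P(A+ ∩ B+) / P(A+) = (2/7) / (3/7) = 2/3.

[code]
# The formula P(A|B) = P(A ∩ B) / P(B), applied to the finite sample above:
# P(B+ | A+) = P(A+ and B+) / P(A+) = (2/7) / (3/7) = 2/3
from fractions import Fraction

p_joint = Fraction(2, 7)   # P(A+ and B+) over the 7 listed tosses
p_Ap    = Fraction(3, 7)   # P(A+)
print(p_joint / p_Ap)      # 2/3, matching the answer in post #2
[/code]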

4. Can conditional probability be greater than 1?

No, conditional probability cannot be greater than 1. It is a measure of likelihood and therefore must be between 0 and 1. If a calculated value exceeds 1, there is an error in the calculation.

5. How is conditional probability used in real-world applications?

Conditional probability is used in many real-world applications, such as weather forecasting, medical diagnosis, and stock market analysis. It helps us make more accurate predictions by taking into account additional information and events that may affect the outcome.
