What should Sleeping Beauty's credence be in a life-or-death coin toss?

In summary, SB was not willing to accept a definition that says that credence must always follow the betting odds.
  • #36
@Moes, her credence is equal to her assessment of the probability; that doesn't change between Sunday and Wednesday ##-## on Sunday she knows that ##P(heads)## = 1/2 and ##P(heads|awake)## = 1/3, just as she does when the ##awake## condition is being fulfilled. If asked on Sunday, "What is the probability that the coin toss is heads?", she will report ##P(heads)## = 1/2, and if asked, "What will you report if asked the same question when you are awakened on Monday or Tuesday?", she will say ##P(heads|awake)## = 1/3, because there are two chances for tails (tails Monday or tails Tuesday) but only one for heads (heads Monday).
 
  • #37
Moes said:
Ok, one way I think I could explain it, is that you can only let your belief depend on the conditions if the conditions could have been different. In this case she could not have been thinking about the probabilities when she was sleeping. So the condition that she is awake cannot be used to decide her credence. The probability of the coin toss was 50/50; the condition that she is now awake shouldn’t change anything, so the probability should remain 50/50.
Why? There is nothing in any definition of credence that requires that. In fact, to me it seems the opposite. If the condition could not be different then you cannot use the unconditional probability.

In fact, this type of reasoning is explicitly seen in discussions of fine tuning. In that context it is called the anthropic principle, and basically says that the relevant probability for our laws of physics is ##P(laws|intelligence)## precisely because if there were no intelligent observers there would be nobody to calculate the probability of the laws of physics.

So in general discussions of probability the restriction you mention does not exist, and there is no such restriction in the definition of credence. So it seems that this is a custom-built restriction pulled out of nowhere.

Edit: one other problem besides the general non-existence of such a restriction, is that even if such a restriction existed it wouldn’t apply to the SB problem. Here the “awake” condition is shorthand for “awake and being interviewed on Monday or Tuesday as part of the experiment”. The condition is in fact different both before and after the experiment.
 
Last edited:
  • Like
Likes sysprog
  • #38
Dale said:
In fact, this type of reasoning is explicitly seen in discussions of fine tuning. In that context it is called the anthropic principle, and basically says that the relevant probability for our laws of physics is P(laws|intelligence) precisely because if there were no intelligent observers there would be nobody to calculate the probability of the laws of physics.
The anthropic principle is exactly what I think confirms my claim. It would say that whether the coin landed heads or tails P(heads|awake)=1 [Edit: P(awake)=1] so neither is more probable.

Dale said:
So in general discussions of probability the restriction you mention does not exist, and there is no such restriction in the definition of credence. So it seems that this is a custom-built restriction pulled out of nowhere.
https://en.wikipedia.org/wiki/Anthropic_Bias_(book)#Self-sampling_assumption

https://en.wikipedia.org/wiki/Anthropic_principle
According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1.
As I mentioned in my first post I think Nick Bostrom‘s book Anthropic Bias is a good one to read to understand anthropic reasoning.

https://www.anthropic-principle.com/q=book/chapter_11/#11d
On the other hand, the intuition that Beauty’s credence of Heads should be [1/2] is justified in cases where there is only one run of the experiment and there are no other observer-moments in the awakened Beauty’s reference class than her other possible awakenings in that experiment. For in that case, the awakened Beauty does not get any relevant information from finding that she has been awakened, and she therefore retains the prior credence of 1/2.

Those who feel strongly inclined to answer P(Heads) = 1/2 on Beauty’s behalf even in cases where various outsiders are known to be present are free to take that intuition as a reason for choosing a reference class that places outsiders (as well as Beauty’s own pre- and post-experiment observer-moments) outside the reference class they would use as awakened observer-moments in the experiment. It is, hopefully, superfluous to here reemphasize that such a restriction of one’s reference class also needs to be considered in the broader context of other inferences that one wishes to make from indexical statements or observations about one’s position in the world. For instance, jumping to the extreme view that only subjectively indistinguishable observer-moments get admitted into one’s reference class would be unwise, because it would bar one from deriving observational consequences from Big-World cosmologies.

I don’t think I can explain my opinion any better than I already did. I still haven’t understood any argument for the thirder position, and I found the sources that support my view. So I think I should stop discussing this further.
 
Last edited:
  • #39
I don't see how anything reasonable could lead one to conclude that ##P(heads)=P(heads|awake)## for an epistemically-sound respondent ##-## anthropic principle, indexicality, etc. notwithstanding.
 
  • Like
Likes Dale
  • #40
Moes said:
The anthropic principle is exactly what I think confirms my claim. It would say that whether the coin landed heads or tails P(heads|awake)=1 so neither is more probable.
No, there is no doubt whatsoever that ##P(heads|awake)=1/3##. If you have any doubt whatsoever about that, simply run a Monte Carlo simulation and prove it to yourself. Any claim to the contrary is not an argument, it is simply misinformation.

While you are free to argue that the credence should be equal to ##P(heads)## or that the credence should be calculated in some entirely different manner, there is simply no avoiding the fact that ##P(heads)=1/2## and ##P(heads|awake)=1/3##.
the conditional probability of finding yourself in a universe compatible with your existence is always 1.
So calculating such conditional probabilities is indeed valid, contrary to your argument.
 
Last edited:
  • #41
@Moes, please understand that the representations ##P(heads)## and ##P(heads|awake)## have strictly defined meanings; that's why @Dale can be so unequivocal about their values. Incidentally, his Monte Carlo simulation shows that in this case, as usual, there's no significant/appreciable distance between the Bayesian and frequentist interpretations of probability.

@Dale, it might be instructive or entertaining to @Moes and to others if you were to post your Monte Carlo code for this simulation. :smile:
 
  • #42
sysprog said:
I don't see how anything reasonable could lead one to conclude that ##P(heads)=P(heads|awake)## for an epistemically-sound respondent ##-## anthropic principle, indexicality, etc. notwithstanding.
Agreed. The argument is about credence. The probabilities are indisputable. They can be directly determined as the long run frequencies in a Monte Carlo simulation of a million Sleeping Beauty experiments. I did that simulation previously and reported the results. I could probably find it, but it would be easier to re-do the simulation.
 
  • #43
Dale said:
Agreed. The argument is about credence. The probabilities are indisputable. They can be directly determined as the long run frequencies in a Monte Carlo simulation of a million Sleeping Beauty experiments. I did that simulation previously and reported the results. I could probably find it, but it would be easier to re-do the simulation.
Oh, you mean like this?
Python:
import numpy as np

n = int(input('Number of samples: '))
print(np.sum(np.random.rand(n)**2 + np.random.rand(n)**2 < 1) / n * 4)
(from https://rosettacode.org/wiki/Monte_Carlo_methods#Python ##-## uses Monte Carlo method to calculate the value of ##\pi##)
 
  • Like
Likes Dale
  • #44
sysprog said:
@Dale, it might be instructive or entertaining to @Moes and to others if you were to post your Monte Carlo code for this simulation. :smile:
Sure, this is Mathematica code:

Sleeping Beauty Monte Carlo:
In[1]:= flips = RandomChoice[{heads, tails}, 1000000];

In[2]:= runs =
  Flatten[Table[{{i, mon, awake}, {i, tue,
      If[i === heads, asleep, awake]}}, {i, flips}], 1];

In[3]:= N[heads/(tails + heads) /. Counts[runs[[All, 1]]]]

Out[3]= 0.50053

In[4]:= N[
 heads/(tails + heads) /.
  Counts[Select[runs, (#[[3]] == awake) &][[All, 1]]]]

Out[4]= 0.333805

Line 1 flips a million coins. Line 2 runs the standard Sleeping Beauty experiment for each flip. Line 3 calculates ##P(heads)## and line 4 calculates ##P(heads|awake)##. This is standard frequentist probability.
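For readers without Mathematica, here is a rough Python equivalent of the same frequency count (my own sketch, not code from the post above; the function and variable names are mine):

```python
import random

def sleeping_beauty_trials(n=1_000_000, seed=0):
    """Run n Sleeping Beauty experiments and return the long-run
    frequencies (P(heads), P(heads | awake))."""
    rng = random.Random(seed)
    heads_runs = 0        # experiments in which the coin landed heads
    awakenings = 0        # all awakenings: Monday always, Tuesday iff tails
    heads_awakenings = 0  # awakenings that occur in a heads experiment
    for _ in range(n):
        heads = rng.random() < 0.5
        heads_runs += heads
        awakenings += 1          # the Monday awakening happens regardless
        heads_awakenings += heads
        if not heads:
            awakenings += 1      # the Tuesday awakening occurs only on tails
    return heads_runs / n, heads_awakenings / awakenings

p_heads, p_heads_given_awake = sleeping_beauty_trials()
print(p_heads)              # close to 1/2
print(p_heads_given_awake)  # close to 1/3
```

As in the Mathematica version, the two ratios converge to 1/2 and 1/3 respectively.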
 
  • Like
Likes sysprog
  • #45
  • #46
Moes said:
It would say that whether the coin landed heads or tails P(heads|awake)=1
Sorry I meant it would say that whether the coin landed heads or tails P(awake)=1

The results of the simulation are obvious.
 
  • #47
Moes said:
The results of the simulation are obvious.
Agreed.

So we are back to the question of why the conditional probability should be forbidden in the calculation of credence. Your justification above seems both invalid in general and inapplicable in the specific case of Sleeping Beauty.

The simple fact is that people’s beliefs are highly conditional. A definition of credence which seeks to restrict that seems obviously wrong.
 
  • #48
Dale said:
Agreed.

So we are back to the question of why the conditional probability should be forbidden in the calculation of credence. Your justification above seems both invalid in general and inapplicable in the specific case of Sleeping Beauty.

The simple fact is that people’s beliefs are highly conditional. A definition of credence which seeks to restrict that seems obviously wrong.
Using the anthropic principle as I was saying it comes out there is a 100% chance that she would be awake whether the coin landed heads or tails. So how do you think this condition of being awake could make tails more probable than heads?
 
  • #49
Moes said:
Using the anthropic principle as I was saying it comes out there is a 100% chance that she would be awake whether the coin landed heads or tails.
That is not correct. I am not sure how you come to that conclusion.

Moes said:
So how do you think this condition of being awake could make tails more probable than heads?
Because the Monte Carlo simulation shows it so. Again, the credences are disputable, but the probabilities are not.

I thought you said the results of the simulation were obvious. Then why do you say things that are obviously wrong?
 
  • #50
Dale said:
Moes said:
Using the anthropic principle as I was saying it comes out there is a 100% chance that she would be awake whether the coin landed heads or tails.
That is not correct. I am not sure how you come to that conclusion.
I think that the intended meaning is that there's an awakening no matter what, and, erroneously, that the anthropic principle is something to which recourse may be had for the purpose of negating the consequence of there being 2 awakenings for tails, and only 1 for heads.
 
  • #51
Dale said:
That is not correct. I am not sure how you come to that conclusion.
Dale said:
In fact, this type of reasoning is explicitly seen in discussions of fine tuning. In that context it is called the anthropic principle, and basically says that the relevant probability for our laws of physics is P(laws|intelligence) precisely because if there were no intelligent observers there would be nobody to calculate the probability of the laws of physics.
The same way you understand that in the discussions of fine tuning the condition “intelligence“ can be added to figure out the probability of us living in a universe with our laws of physics, likewise in the sleeping beauty problem when figuring out the probability of her being awake given that the coin landed heads you should understand that we should need to add the condition of intelligence.

So it comes out P(awake)= P(awake|intelligence)=P(awake|awake)=1

Therefore,
Moes said:
The anthropic principle is exactly what I think confirms my claim. It would say that whether the coin landed heads or tails P(awake)=1 so neither is more probable.

I don‘t know how to write a condition on the condition but it should come out that we are looking for P(heads) with the condition that she is awake but only on condition that she is awake which is the same as P(heads) which is 1/2.
 
  • Skeptical
Likes sysprog
  • #52
@Moes, please re-read that post, viewing it as if someone else had written it, and see if it doesn't look like nonsense to you. It's hard to avoid writing nonsense if you're trying to embrace a false idea. And if you don't know how to write conditional probabilities, then please learn how before writing them. Thanks.
 
  • Like
Likes Dale
  • #53
Moes said:
So it comes out P(awake)= P(awake|intelligence)=P(awake|awake)=1
That isn't what it says. So going back to this:
the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1
So if we denote a universe with laws of physics that are compatible with life as ##NiceLaws## and the fact that I exist as ##IExist## then the anthropic principle is just pointing out that ##P(NiceLaws|IExist)=1##. The point is that it is valid to form conditional probabilities, a statement that you have previously opposed.

For the Sleeping Beauty problem, the equivalent statement, in my opinion, would be that if she is being interviewed and asked about her credence then she is awake. So ##P(awake|interview)=1##.

Your statement that ##P(awake|awake)=1## is true, but it is tautologically true for any proposition, whereas the anthropic principle doesn't apply for all propositions. So although ##P(awake|awake)=1## is true, I would not associate its truth with the anthropic principle in any way.

Furthermore, the claim that ##P(awake)=P(awake|awake)## is simply wrong. The conditional and the unconditional probabilities are not the same, and even if they were they provide no information on ##P(heads|awake)## which is the relevant probability.

Moes said:
I don‘t know how to write a condition on the condition but it should come out that we are looking for P(heads) with the condition that she is awake but only on condition that she is awake which is the same as P(heads) which is 1/2.
There is no such thing as conditions on conditions. There are just multiple conditions. Multiple conditions are typically written as ##P(event|Condition1,Condition2)## or as ##P(event|Condition1 \cap Condition2)##. In general ##P(A|B,B) = P(A|B \cap B) =P(A|B) \ne P(A)##.
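The identity ##P(A|B,B)=P(A|B)\ne P(A)## can be illustrated with a toy calculation over the four equally likely (coin, day) states of the experiment (a sketch of my own; the helper `p` and the lambdas are not standard notation):

```python
from fractions import Fraction

# The four equally likely (coin, day) states, with whether Beauty is
# awake in each: she sleeps through (heads, Tuesday) only.
states = [
    ("heads", "mon", "awake"),
    ("heads", "tue", "asleep"),
    ("tails", "mon", "awake"),
    ("tails", "tue", "awake"),
]

def p(event, *conditions):
    """P(event | conditions) over the uniform distribution on `states`."""
    sample = [s for s in states if all(c(s) for c in conditions)]
    return Fraction(sum(1 for s in sample if event(s)), len(sample))

heads = lambda s: s[0] == "heads"
awake = lambda s: s[2] == "awake"

print(p(heads))                 # 1/2 -- P(A)
print(p(heads, awake))          # 1/3 -- P(A|B)
print(p(heads, awake, awake))   # 1/3 -- P(A|B,B) = P(A|B), not P(A)
```

Repeating a condition changes nothing; it is adding a genuinely new condition that can move the probability.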
 
Last edited:
  • Like
Likes sysprog
  • #54
sysprog said:
I think that saying that SB's contigent awakening is "new information" to her introduces an inclarity. It's not new information with respect to her knowing on Sunday what her answer should or will be if/when she awakens on Monday or Tuesday. For that, she needs only the information that is given to her on Sunday.
What you are objecting to are elements Elga added to describe his solution to the problem he published, not the problem as he published it, or the way I tried to address the problem he published. The only inclarity is due to trying to use these elements when they are not present in what I asked.

I said nothing about Sunday, Monday, or Tuesday. That's information Elga added as part of his solution. I said nothing about knowing, before the experiment starts, what you (original problem) or SB (Elga's solution) would answer. The new information I described is not "relative to" what you know before being put to sleep[1], it is about comparing the current state to the state you know, right now, was used to decide if a waking occurs.

When you (not SB) are awake in the experiment I described:
  1. You know that a decision was made, while you were asleep at time T0, about whether to wake you.
  2. You know that the state of a dime and a quarter, at time T0, was well-described by the sample space {(H,H), (T,H), (H,T), (T,T)} with probability distribution {1/4,1/4,1/4,1/4}.
  3. You know that the decision was made to wake you. That would not have happened if the actual state, at time T0, had been (H,H).
  4. So you know that the sample space that describes the state of the coins now, at time T1>T0, is {(T,H), (H,T), (T,T)}.
This information, about the difference in the probabilistic states at time T0 and T1, is new information. Since only the state (H,H) was affected, you can update the probability distribution to {1/3,1/3,1/3}. Since the only remaining state where the Quarter is currently showing Heads is (T,H), your degree of belief that the Quarter is showing Heads should be 1/3.
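Steps 1–4 can be checked by direct enumeration of the two-coin sample space (a sketch; the variable names are mine):

```python
from fractions import Fraction
from itertools import product

# The state of (dime, quarter) at time T0: four equally likely outcomes.
states = list(product("HT", repeat=2))  # ('H','H'), ('H','T'), ('T','H'), ('T','T')

# You are wakened unless the state was (H, H), so at time T1 the
# conditioned sample space has three equally likely states left.
awake_states = [s for s in states if s != ("H", "H")]

# Only (T, H) among the remaining states has the quarter showing heads.
p_quarter_heads = Fraction(
    sum(1 for dime, quarter in awake_states if quarter == "H"),
    len(awake_states),
)
print(p_quarter_heads)  # 1/3
```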

+++++

[1] The same is true in Elga's solution, which is where halfers go wrong. The new information is about what SB knows about the coin at the moment she answers the question, compared to what she knows was true when it was flipped. This is AFTER she was put to sleep. Elga's introduction of days apparently muddles that issue for some. That's why I used two coins, and asked about the current state compared to the state when the decision was made to awaken you.
 
Last edited:
  • #55
I agree with you regarding the correctness of the 1/3 conclusion.
 
  • #56
Dale said:
So if we denote a universe with laws of physics that are compatible with life as NiceLaws and the fact that I exist as IExist then the anthropic principle is just pointing out that P(NiceLaws|IExist)=1. The point is that it is valid to form conditional probabilities, a statement that you have previously opposed.
You are missing the point of the anthropic principle. We don’t need the anthropic principle to tell us that it is valid to form conditional probabilities. That is obvious. The point of the anthropic principle is to explain fine tuning. I don’t see how you’re explaining how we just happened to be in a universe that’s fine-tuned. Part of the problem is why this condition “IExist” actually exists despite the fact that it was so improbable.
Dale said:
Your statement that P(awake|awake)=1 is true, but it is tautologically true for any proposition, whereas the anthropic principle doesn't apply for all propositions. So although P(awake|awake)=1 is true, I would not associate its truth with the anthropic principle in any way.

Furthermore, the claim that P(awake)=P(awake|awake) is simply wrong. The conditional and the unconditional probabilities are not the same, and even if they were they provide no information on P(heads|awake) which is the relevant probability.
What I was trying to say is that when she wants to figure out the probability of her being awake she needs to account for the precondition that she must be awake to be trying to figure out the probability. So she needs to add a condition to P(awake) so that the probability that she is awake is P(awake|awake). If you understood the anthropic principle you should understand why it applies here.
Dale said:
There is no such thing as conditions on conditions
I’m not sure what that means. It definitely makes sense to ask what the probability of A is conditioned on the fact that B is true, but B is only true if B is true. It might be pointless since it’s the same as asking what the probability of A is, but the statement makes sense.
sysprog said:
@Moes, please re-read that post, viewing it as if someone else had written it, and see if it doesn't look like nonsense to you. It's hard to avoid writing nonsense if you're trying to embrace a false idea. And if you don't know how to write conditional probabilities, then please learn how before writing them. Thanks.
I guess I just don’t know how to write and explain things well. But I fully understood Nick Bostrom’s argument for the halfer position which I think is exactly the way I understand it. If you are interested in understanding it maybe try reading his book.
 
  • #57
Moes said:
We don’t need the anthropic principle to tell us that it is valid to form conditional probabilities. That is obvious.
Then I don’t understand your previous statement:
Moes said:
Ok, one way I think I could explain it, is that you can only let your belief depend on the conditions if the conditions could have been different.
The point of the anthropic principle is that conditional probabilities are indeed valid, even when such conditions could not be any other way, in contradiction to your earlier claim.

Moes said:
I’m not sure what that means. It definitely makes sense to ask what the probability of A is conditioned on the fact that B is true, but B is only true if B is true. It might be pointless since it’s the same as asking what the probability of A is, but the statement makes sense.
No, the statement doesn’t make any sense. If you are indeed making a valid point here then you will need to find a statistical (not philosophical) reference that explains what you are trying to say. I have never heard of conditions on conditions, just multiple conditions.

Moes said:
So she needs to add a condition to P(awake) so that the probability that she is awake is P(awake|awake).
##P(awake|awake)=1## has no bearing on ##P(heads|awake)=1/3\ne P(heads)##
 
Last edited:
  • Like
Likes sysprog
  • #58
Dale said:
I have never heard of conditions on conditions, just multiple conditions.
##(p\Rightarrow(a\Rightarrow b))\iff((p\wedge a) \Rightarrow b)##
 
  • #59
Maybe it would help if we drop the weird thing about being awake and having no memory.

Once a week someone flips a coin, and if it's heads they turn a light on and leave it on for one day, then they go back and turn it off, and if it's tails they turn a light on for two days, then they go back and turn it off. You're aware of this, but you don't remember which day of the week they flip the coin on. You walk into the room one day and see the light is on. What is the probability the coin flip was tails?
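The light-switch version is easy to simulate (a sketch of my own; I model "you walk in one day" as arriving on a uniformly random day of the 7-day week):

```python
import random

rng = random.Random(0)
light_on = 0        # visits on which the light happened to be on
light_on_tails = 0  # ... and the coin was tails
for _ in range(100_000):
    tails = rng.random() < 0.5
    days_on = 2 if tails else 1      # tails keeps the light on twice as long
    day = rng.randrange(7)           # you arrive on a uniformly random day
    if day < days_on:                # the light is on when you walk in
        light_on += 1
        light_on_tails += tails
print(light_on_tails / light_on)     # close to 2/3
```

Seeing the light on is twice as likely under tails, so the conditional probability of tails comes out near 2/3.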
 
  • Like
Likes sysprog
  • #60
Very nice restatement. If that doesn't do it perhaps we should stop beating this dead horse.
I am amused that the re-opened thread essentially recapitulated the initial thread.
 
  • #61
Office_Shredder said:
Once a week someone flips a coin, and if it's heads they turn a light on and leave it on for one day, then they go back and turn it off, and if it's tails they turn a light on for two days, then they go back and turn it off. You're aware of this, but you don't remember which day of the week they flip the coin on. You walk into the room one day and see the light is on. What is the probability the coin flip was tails?
Yes, this is a simple example where the probability is obviously 2/3. This was never questionable. This would be like a random person walking into a room where he knows the sleeping beauty experiment is taking place, but doesn’t have any other information. If he sees sleeping beauty awake he should think the probability that the coin landed tails is 2/3.

Office_Shredder said:
Maybe it would help if we drop the weird thing about being awake and having no memory
The loss of memory is the key point in this problem.

This is exactly what I think the problem is. People think the sleeping beauty problem is just another mathematical probability question. They don’t realize that the loss of memory adds philosophical type of questions to the problem. I guess mathematicians are just not the right type of people to ask about these questions. But I’m surprised how anyone can really believe the answer is not 1/2. Just thinking about myself in sleeping beauty’s situation the answer seems obvious to me.
 
  • #62
Moes said:
I guess mathematicians are just not the right type of people to ask about these questions.
You don't have to be a mathematician to solve this, but it doesn't hurt.
 
  • #63
Moes said:
I’m surprised how anyone can really believe the answer is not 1/2.
How can you really believe that it's not 1/3, when in this thread you have shown repeatedly that you know that 1/3 is the correct answer?
Moes said:
Just thinking about myself in sleeping beauty’s situation the answer seems obvious to me.
If you pretend that SB is asked what's the probability for a random coin toss, then the 1/2 answer is obvious, but that's not the postulated situation, and you've shown that you know that, and that you know that the perhaps less obvious correct answer in the scenario as described is 1/3.
 
Last edited:
  • #64
Moes said:
But I’m surprised how anyone can really believe the answer is not 1/2.
That sounds like something you should work on. You may not agree with an answer or an argument, but you should be able to get out of your own head and understand other people enough to see how they can believe something you don’t.

In this case the argument is exceptionally simple: I believe that the credence of a person is measured by the bet they would take, as described in the blog I linked to; 1/3 is the break-even for that specific bet; and since she is rational, and rational people don’t want to lose money, that is the bet she would take, and hence her credence.
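The break-even claim is just arithmetic over the two equally likely flips: heads yields one awakening (one bet), tails yields two. A sketch (the stake/payout normalization and the function name are mine, not from the blog):

```python
from fractions import Fraction

def expected_profit(credence):
    """Expected profit per experiment for a bet placed at every awakening:
    stake `credence` to win a payout of 1 if the coin was heads.
    Heads -> one awakening (one bet); tails -> two awakenings (two bets)."""
    stake = credence
    heads_case = 1 * (1 - stake)   # one winning bet
    tails_case = 2 * (0 - stake)   # two losing bets
    return Fraction(1, 2) * heads_case + Fraction(1, 2) * tails_case

print(expected_profit(Fraction(1, 3)))  # 0    -> break-even at credence 1/3
print(expected_profit(Fraction(1, 2)))  # -1/4 -> betting at credence 1/2 loses
```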

If you cannot understand my belief then you aren’t even trying to do so. You even agree on everything except the definition of credence. But when someone clearly explains what they mean by a word and then uses that word exactly as they have explained, any lack of understanding is on the other party.

Again, you don’t have to agree with the thirder position, but at this point if you fail to understand it that is on you.
 
  • #65
Moes said:
This is exactly what I think the problem is. People think the sleeping beauty problem is just another mathematical probability question.
But at the point that it is not "just another mathematical probability question", it is no longer subject to rigorous analysis and you can torture it to produce any desired answer.
How many angels can dance on the head of a pin?
 
  • #66
hutchphd said:
But at the point that it is not "just another mathematical probability question", it is no longer subject to rigorous analysis and you can torture it to produce any desired answer.
How many angels can dance on the head of a pin?
And since the answer in the end is a number then math must enter in at some point.
 
  • #67
Moes said:
But I’m surprised how anyone can really believe the answer is not 1/2. Just thinking about myself in sleeping beauty’s situation the answer seems obvious to me.
This opinion is based on the misconception that SB "receives no (new) information" by being wakened. Yet I have seen no definition of what halfers who say this think "new information" means, even from the ones who keep demanding that I supply one.

They have said that because she knew she would be wakened all along, nothing she "learns" from being awake tells her anything she didn't know on Sunday Night. But they haven't defined how this distinction means anything to probability theory. And it is easy to remedy: In Elga's reformulation (it isn't the original problem, which is another thing nobody wants to recognize), wake her both days. On Monday, and on Tuesday if the result is Tails, interview her with the question about confidence. But on Tuesday, if the result is Heads, take her to Disneyworld without asking her anything about the coin.

This way, if she is interviewed, she knows that there are four possible states under which she could be awake. She also knows that each had the same probability of being the current state when she woke up, and that one is now ruled out by the fact that she is not at Disneyworld.

So now her "awake" knowledge includes something that was not certain when she went to sleep on Sunday. The answer, when she is interviewed, has to be 1/3.

The point here is that "new information" includes anything she can learn about the current state, including what she knows it isn't. It does not matter what would happen in the state she knows does not correspond to the current state, just that she knows it isn't the current state. Being awake and interviewed supplies this information.

+++++

Edit: And to make it more intuitive, use the common trick that halfers think makes their answer more intuitive. Wake her every day for a year. On Jan 1, and on every day after that if the coin result was Tails, ask her for her credence about the coin. But on those other 364 days, if the coin result is Heads, take her on a different outing and don't ask her about the coin.
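The Disneyworld variant is also easy to check by simulation (a sketch; variable names are mine). She is awakened on both days, but the (heads, Tuesday) awakening is an outing rather than an interview:

```python
import random

rng = random.Random(0)
interviews = 0
interviews_heads = 0
for _ in range(100_000):
    heads = rng.random() < 0.5
    for day in ("mon", "tue"):
        # She is awakened on both days, but on (heads, Tuesday) she is
        # taken to Disneyworld instead of being interviewed about the coin.
        if not (heads and day == "tue"):
            interviews += 1
            interviews_heads += heads
print(interviews_heads / interviews)  # close to 1/3
```

Among interviews, heads accounts for one of three equally likely awakened states, matching the 1/3 answer in the post above.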
 
Last edited:
  • #68
JeffJo said:
Yet I have seen no definition of what halfers who say this think "new information" means,
I have also not seen a definition about “credence” that leads to the halfer position. I have seen the definition that credence is a degree of belief, but that definition doesn’t lead to any number, let alone specifically to 1/2. And the reference in the OP didn’t provide a definition, it just objected to using a betting scheme in either the definition or measurement of credence.

While I understand the halfer argument, I think it is not well founded. With the key issue being an imprecise definition of credence and a second issue being not using the conditional probability.
 
  • Like
Likes JeffJo
  • #69
Dale said:
I have also not seen a definition about “credence” that leads to the halfer position. I have seen the definition that credence is a degree of belief, but that definition doesn’t lead to any number, let alone specifically to 1/2. And the reference in the OP didn’t provide a definition, it just objected to using a betting scheme in either the definition or measurement of credence.

While I understand the halfer argument, I think it is not well founded. With the key issue being an imprecise definition of credence and a second issue being not using the conditional probability.
It is my opinion that halfers want "credence" to have a different meaning than "probability," so that they can dismiss arguments that use probability theory.
 
  • Like
Likes Dale
  • #70
JeffJo said:
It is my opinion that halfers want "credence" to have a different meaning than "probability," so that they can dismiss arguments that use probability theory.
As long as they are clear, that is fine. Such definitions can indeed make 1/3 not the answer, but they make it difficult to get 1/2 or any other number as well. With such a definition, how do you get any number? If it isn’t a probability, how do you get a number between 0 and 1? And how do you specifically get 1/2? So I often think that such definitions don’t, in fact, support the 1/2 argument. The "level of belief" definition, by itself, is insufficient to get any number, including 1/2, IMO.
 
  • Like
Likes sysprog