One 50% bet is worse than fifty 1% bets?

  • Thread starter iDimension
In summary, the conversation discusses two options for betting in a game with a maximum pot of £1000 and a starting amount of £500. Option one is to bet the full £500, giving a 50% chance to win £500. Option two is to spread the bets into fifty £10 bets, each giving a 1% chance to win, but with fifty tries. The speaker favors the second option, as it offers more chances to win and, he believes, a minimum take-home of £500. However, many people prefer the first option due to impatience and a desire for big, early wins. The conversation also delves into the psychology of betting and the importance of managing odds and potential losses. Overall, the speaker believes that spreading the money over many small bets is the better strategy.
  • #36
iDimension said:
What do you mean by expected net? Do you mean that when it all balances out over a large number of games, the money you spent might be $20 million, the amount you win is $10 million and the amount you lose is $10 million, meaning you've made 0% profit?
By expected net, I mean the "expected value" of your net winnings, i.e. winnings minus losses. "Expected value" is a term used in probability and statistics when you have a probability distribution over a set of numeric outcomes: you multiply the probability of each outcome by the numeric value of that outcome and add up those products. The total is the expected value. It is also called the mean of the distribution.

Yes, over a sufficiently large number of bets, it is overwhelmingly likely that the total earnings will be approximately equal to the expected value per bet multiplied by the number of bets. This is called the "law of large numbers".
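To see the law of large numbers at work, here is a minimal Python sketch (my own illustration, not part of the thread) that repeats a fair 50/50 bet of $500 and watches the average net winnings settle toward the expected value of $0:

```python
import random

def net_winnings():
    """One play of a fair 50/50 bet: stake $500, win $500 or lose $500."""
    return 500 if random.random() < 0.5 else -500

trials = 1_000_000
total = sum(net_winnings() for _ in range(trials))
# By the law of large numbers, this average approaches the expected value, $0.
print(f"Average net per bet over {trials:,} trials: ${total / trials:+.2f}")
```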
 
  • #37
iDimension said:
OK so just one last question.

If 50% bets are better than 1% bets, are 90% bets better than 50% bets?

Betting 90% ($900) to win $100 each time.

You have to define what "better" means.
 
  • #38
Hornbein said:
You have to define what "better" means to you.
 
  • #39
iDimension said:
If 50% bets are better than 1% bets.
They are not, and I'm getting tired of repeating this.
 
  • #40
I'm confused about the initial set-up, and I think other people may be too. The one bet of $500, paying $500 50% of the time, is clear enough. Net expectation $250, right?
It's the 50 small bets of $10 I'm unsure about. What are the odds there: 100:1 or 1:1? I think people are confusing the fraction of the pot that you're betting with the odds on the small bets, which aren't stated explicitly. Either case is easy to analyse:
If the $10 bet is 100:1 against, each bet has a net expectation of 10 cents. Fifty of these make a grand total expectation of $5.00. Not so good.
If the $10 bets have 1:1 odds, paying $10, the expectation of each bet is $5. Fifty of these make your net expectation $250, the same as with the single big bet, as it should be with 50:50 bets. BUT WITH THIS DIFFERENCE: with the 50 small bets you are virtually guaranteed to get a significant amount of money! $250 on average, but the chance of getting $0 or $500 is about a quadrillion to one against. So you can take a guaranteed (almost) win of a few hundred, or take a 50-50 chance of $500. This will depend on your own situation. Me, I'd take the found money.
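Using this post's framing (fifty independent even-money bets, each returning $10 or nothing), a short Python check confirms both the $250 average and how astronomically unlikely the extremes are; the exact figures are my own:

```python
from math import comb

n = 50                              # fifty even-money $10 bets
p_extreme = 2 * 0.5 ** n            # all fifty lose, or all fifty win
print(f"Expected total: ${n * 0.5 * 10:.0f}")
print(f"P($0 or $500): {p_extreme:.2e} (about 1 in {1 / p_extreme:,.0f})")

# Probability the total lands within $100 of the mean (15 to 35 wins):
p_mid = sum(comb(n, k) for k in range(15, 36)) * 0.5 ** n
print(f"P(total between $150 and $350): {p_mid:.4f}")
```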
 
  • #41
Based on what other people have said, it looks like your expected value numbers are wrong.

If you put $500 into the pot, you have a 50% chance of winning $500 and a 50% chance of losing $500, giving an expected value of $0.00, not $250.00.

If you put $10 into the pot, you have a 1% chance of winning $990 and a 99% chance of losing $10. Again, you can multiply this out and get an expected value of $0.00.

In the long run, neither strategy is dominant. Both strategies lead to what everybody was thinking from the beginning: on average, you end up with however much money you started with.

Your chance of winning is equal to x/1000, where x is how much money you put in.

The amount you win is equal to 1000 - x.

So the winning term of the expected value is (x/1000) * (1000 - x), which is x - x^2/1000.

But you also have a chance to lose all the money you put in the pot. That probability is (1000 - x)/1000, and you lose x, which contributes -x * (1000 - x)/1000, i.e. -x + x^2/1000.

Total expected value is the two added together: (x - x^2/1000) + (-x + x^2/1000) = 0.

No matter how much money you put into the pot and no matter how many bets you make, the expected value is always 0.
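A quick numerical check of this algebra (a sketch, assuming the $1000-pot rules above):

```python
def expected_value(x, pot=1000):
    """EV of staking x: win pot - x with probability x/pot,
    lose x with probability (pot - x)/pot."""
    return (x / pot) * (pot - x) + ((pot - x) / pot) * (-x)

for stake in (10, 50, 500, 900):
    print(f"stake ${stake}: EV = ${expected_value(stake):.2f}")  # always $0.00
```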

Here is a way to help understand this.

Imagine there are 10 people with $100. They put all their money into a pot and one guy leaves with $1000

The same 10 people decide to put another $100 into the pot. Another person leaves with $1000. People have spent $2000 and $2000 has left the pot.

If this process continues forever, it's obvious that however much money goes into the pot is how much money leaves the pot.

Imagine that this process is repeated 10 times. All 10 people have won the pot once and have $1000. But they realize that this is exactly how much money all of them have put into the pot: 10 bets of $100, repeated 10 times, or $10,000.

There is no way anybody comes out of this system with any sort of advantage, and there is no way more money will ever leave the pot than what went into the pot.

In order to believe that there is a dominant strategy, you have to believe that money comes from nowhere. The pot does not make any money; it only redistributes, evenly on average, the money that people put in.
 
  • #42
xman720 said:
...

Yeah xman, I agree with this and I can see why this is now the case. But what other people have said here is that with the 1% bets, your chance of losing is greater than your chance of winning. Thus if you played a large number of games, your wins would be fewer than your losses, which is clearly worse than 50/50 bets.

I don't know why some people are struggling to understand what I'm saying; maybe I'm not great at explaining things. The key factor here is that we don't have infinite money. You have just $500... when it's gone it's gone, that's it.

Let's just use a $100 pot.

Method One: You put your full $50 into the pot, giving you one 50% chance to win the other $50. Whether you win or lose, you only ever play one game.

Method Two: You split your $50 into $1 bets.
Game 1: You bet $1, giving you a 1% chance to win the $100 pot. If you win you walk away with $149 and never play again. If you lose, you play again.
Game 2: You bet another $1, giving you a 1% chance to win the $100 pot. If you win you walk away with $148 and never play again. If you lose, you play again.
...
...
...
Game 50: You finally win with your last dollar. You walk away with the $100 pot, bringing you to $100, which is the same amount that Method One could have won; but with Method Two you could have won more, making it the "better" option.

But then I see people here saying that the 1% bets actually give you a 60% chance to lose and a 40% chance to win, which is why I thought that the one 50% bet is the "better" option.
 
Last edited:
  • #43
Apologies, but I don't understand the original question. (I have taught probability at Stanford, so the theory is no problem.)

What is the role of the "other people" who "make up the other £500"?

And where it says "The second option is to spread your bets, and instead you decide to make fifty £10 bets giving you a 1% chance to win, but you get fifty tries," the fifty £10 bets give you a 1% chance to win what, exactly?

And are you saying the 50 bets each give you a 1% chance of winning [whatever], or do they give you a 1% chance of winning [whatever] when taken together?


Is there any chance that someone can express the question clearly and unambiguously? In my fairly extensive experience, probability questions are often asked unclearly and ambiguously, and that is the main thing that makes them hard.
 
  • #44
iDimension said:
Game 50: You finally win with your last dollar. You walk away with the $100 pot, bringing you to $100, which is the same amount that Method One could have won; but with Method Two you could have won more, making it the "better" option.

But then I see people here saying that the 1% bets actually give you a 60% chance to lose and a 40% chance to win, which is why I thought that the one 50% bet is the "better" option.
Okay, one last time, and I think this thread has gone in circles for way too long so I'm out after this post: those two effects exactly cancel each other. Both options have exactly the same expectation value of zero.
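A quick Monte Carlo sketch (my own illustration, using the $50 stake and $100 pot from the post above) bears this out: the average walk-away is $50 either way, even though the chance of busting differs:

```python
import random

def method_one():
    """Stake the full $50 on one 50/50 game for the $100 pot."""
    return 100 if random.random() < 0.5 else 0

def method_two():
    """$1 bets at 1% each; stop at the first win or when the $50 is gone."""
    cash = 50
    for _ in range(50):
        cash -= 1
        if random.random() < 0.01:
            return cash + 100        # a win on play k leaves $150 - k
    return 0

trials = 200_000
for name, play in (("Method One", method_one), ("Method Two", method_two)):
    results = [play() for _ in range(trials)]
    print(f"{name}: P($0) ~ {sum(r == 0 for r in results) / trials:.3f}, "
          f"mean ~ ${sum(results) / trials:.2f}")
# Expect P($0) near 0.500 vs 0.605, with a mean near $50 for both.
```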
 
  • #45
I think I can help clear up the confusion. The game is defined as follows:

On each play the gambler can choose to make exactly 1 bet from a choice of 2:
  • Bet50: pay $50 for a 50% chance of a payout of $100
  • Bet1: pay $1 for a 1% chance of a payout of $100
The Expected Values for each bet are easily calculated as zero (remembering to subtract the cost of the bet):
  • E(Bet50) = $100 x 0.5 - $50 = $50 - $50 = $0
  • E(Bet1) = $100 x 0.01 - $1 = $1 - $1 = $0
The outcome of Method One is clear: you have a 50% chance of walking away with $100 and a 50% chance of walking away with nothing.

The outcome of Method Two needs a bit of work:
  • 1% chance of winning the first play and walking away with $149
  • 99% x 1% chance of losing the first play, winning the second and walking away with $148
  • 99% x 99% x 1% chance of losing the first two plays...
  • ...
  • (99%)^49 x 1% chance of losing 49 plays, winning the 50th and walking away with $100
  • (99%)^50 ≈ 60.5% chance of losing all 50 plays and walking away with nothing
So although there is a chance with Method Two to walk away with up to $49 more than with Method One, you are more likely to walk away with nothing.

There is also Method Three where you play all 50 times whether you win or not. The chance of losing 50 times and walking away with nothing is again approximately 60.5%, but now you have a chance to win even more - you could even walk away with $5,000! (The probability of that is about 10^-100; compare one over that number with the roughly 10^80 sub-atomic particles in the observable universe.)
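These figures can be checked exactly (a short sketch; 0.99 ** 49 means 0.99 raised to the 49th power):

```python
# Method Two: stop at the first win, or after 50 losing plays.
p_bust = 0.99 ** 50
print(f"P(lose all 50 plays) = {p_bust:.4f}")           # ~0.6050

# A win on play k leaves $150 - k; the expected walk-away works out to $50,
# exactly the starting stake, confirming a net expectation of $0.
expected = sum(0.99 ** (k - 1) * 0.01 * (150 - k) for k in range(1, 51))
print(f"Expected walk-away = ${expected:.2f}")          # $50.00

# Method Three: play all 50 regardless; the $5,000 jackpot needs 50 wins.
print(f"P(win all 50 plays) = {0.01 ** 50:.0e}")        # 1e-100
```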
 
  • Like
Likes iDimension
  • #46
MrAnchovy said:
The outcome of Method One is clear: you have a 50% chance of walking away with $100 and a 50% chance of walking away with nothing.
  • (99%)^50 ≈ 60.5% chance of losing all 50 plays and walking away with nothing
So although there is a chance with Method Two to walk away with up to $49 more than with Method One, you are more likely to walk away with nothing.

Method One: probability of walking away with $0 = 50%
Method Two: probability of walking away with $0 = 60.5%

So betting once for 50% is the better option. Which is basically what I've been asking this entire thread. Apologies if my explanation of the game was terrible.

Thanks to all who posted and had the patience to help me lol.
 
  • #47
iDimension said:
Method One: probability of walking away with $0 = 50%
Method Two: probability of walking away with $0 = 60.5%

So betting once for 50% is the better option.
The probability to lose everything is not the only possible measure of success. If you measure success by expected value, all options are equally poor.
 
  • #48
iDimension said:
So betting once for 50% is the better option.
I didn't say that. Consider this:

You believe that Method One is the better option, but suppose there is another gambler, Two, who chooses Method Two. After playing your games you compare notes.
  • There is a 30.25% chance you both end up with $0
  • There is a 0.31% chance you both end up with exactly $100
  • There is a 30.25% chance you end up with $100 and Two leaves with nothing
  • There is a 0.31% chance you end up with $0 and Two ends up with exactly $100
  • There is a 19.44% chance that you end up with $0 and Two ends up with more than $100
  • There is a 19.44% chance that you end up with $100 and Two ends up with more than $100
So the approximate chances are 31% that you both walk away with the same, 30% that you walk away with more than Two, and 39% that the player of Method Two walks away with more than the player of Method One. Which do you think is the best method now?
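A short script reproduces this table exactly (a sketch under the same rules as above):

```python
# Gambler Two's outcome distribution under Method Two.
p_zero = 0.99 ** 50                     # busts: ~0.6050
p_exact_100 = 0.99 ** 49 * 0.01         # wins only on the 50th play
p_more_100 = 1 - p_zero - p_exact_100   # wins on one of plays 1-49

# Gambler One independently ends with $100 or $0, probability 1/2 each.
cases = [
    ("both $0",              0.5 * p_zero),
    ("both exactly $100",    0.5 * p_exact_100),
    ("One $100, Two $0",     0.5 * p_zero),
    ("One $0, Two $100",     0.5 * p_exact_100),
    ("One $0, Two > $100",   0.5 * p_more_100),
    ("One $100, Two > $100", 0.5 * p_more_100),
]
for label, p in cases:
    print(f"{label}: {100 * p:.2f}%")
print(f"total: {100 * sum(p for _, p in cases):.2f}%")  # sanity check: 100%
```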
 
  • #49
"What kind of crazy maths is this?" you might ask, "I ask a simple question and get three different answers!"

This is the maths of decision theory. It doesn't have a lot of significance for the natural sciences, but in the field of economics it is very important. Key to decision theory is the concept of a utility function: this is a rigorous way of evaluating different outcomes to decide what is "best". The reason we have three different answers in the game posed by the OP is that we have used three different utility functions:
  • maximising the expected value of the winnings (note that this is rarely used as a utility function, because expected values only assert themselves over a large number of trials, whereas in decision theory we generally only get to make a decision once), which is $0 for both Methods, so they are equal
  • maximising the probability of (winnings > $0) which is higher for Method One
  • minimising the probability of (regret > $0), which favours Method Two (regret is an important concept within decision theory); the sketch after this list scores both methods under each criterion
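Here is that scoring as a short sketch, using the probabilities computed in post #48 above (regret is judged against the other method's outcome):

```python
# Outcome probabilities for Method Two.
p_zero = 0.99 ** 50                     # Two busts
p_exact = 0.99 ** 49 * 0.01             # Two ends with exactly $100
p_more = 1 - p_zero - p_exact           # Two ends with more than $100

# Criterion 1: expected value is $0 for both methods (shown earlier).
# Criterion 2: maximise P(winnings > $0) -- favours Method One.
print(f"P(> $0): One = 0.500, Two = {1 - p_zero:.3f}")

# Criterion 3: minimise P(regret > $0) -- favours Method Two.
regret_one = 0.5 * (p_exact + p_more) + 0.5 * p_more   # Two beat your choice
regret_two = 0.5 * p_zero                              # One beat your choice
print(f"P(regret): One = {regret_one:.3f}, Two = {regret_two:.3f}")
```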
 
  • #50
I still think the question is a bit vague.

Bet #1. $500. Odds 50%. Win pays $500 (plus the original $500, net $1000)
Bet #2 $10. Odds 1%. Win pays $990 (plus the original $10, net $1000)

The choice is between making 1 of Bet #1 or 50 of Bet #2? For convenience assume they are not 50 sequential bets. Say you can buy 1 of type Bet #1 or 50 of type Bet #2. There is no time difference or difficulty difference ... it is purely a matter of preference.

I see 50 of Bet #2 as better. You have a chance of winning 50 times. The downside is the same: you lose all your money. The upside on 50 bets is higher.

A lot of people point out that the game of chance proposed is unrealistic. I agree. In a casino, you always have worse odds than the payout. But if it was just a matter of putting the same $500 down, either into a box marked Bet#1, or a box marked Bet#2, and then getting the outcome, I would choose the box marked Bet#2.

I've only casino gambled once, and my goal was to play the game with the fairest odds (craps), and to make the smallest bets. Doing that, some people lose and some people win. I won a small amount. But the house ALWAYS wins in the long run.

Perhaps I misunderstand the problem, but the time element seemed to be a side issue that should be subtracted out. I thought the problem was not how to sequentially gamble, but whether to take 50 betting positions vs 1 betting position, with the odds and payout as described. I think the 50 betting positions are superior.
 
  • #51
Taking the game of roulette as an example, should one bet a large amount on black, or a small amount on 23 many times? The argument goes that one will miss 23 60% of the time, while 40% of the time one will hit 23 one or more times.

I think one loses about 2.7% to the house in (single-zero) roulette, so you would lose something like 63% of the time with that strategy, almost 2/3rds. But when you won, you would win a lot back, thanks to multiple hits on some occasions, and the average over time would be that -2.7%.

Betting on black, you'd win ~49% of the time (because it lands on 0 sometimes).
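A quick check of those roulette numbers (my own sketch, assuming a single-zero wheel with the standard 35:1 straight-up payout):

```python
# European roulette: 37 pockets (0-36); straight-up pays 35:1, black pays 1:1.
ev_straight = (1 / 37) * 35 + (36 / 37) * (-1)   # per $1 on number 23
ev_black = (18 / 37) * 1 + (19 / 37) * (-1)      # per $1 on black
print(f"EV straight-up: {ev_straight:.4f}, EV black: {ev_black:.4f}")  # -1/37

# How often does a run of small straight-up bets miss entirely?
for spins in (19, 37, 50):
    print(f"P(no hit in {spins} spins) = {(36 / 37) ** spins:.3f}")
```

On this wheel, a roughly 60% chance of missing entirely corresponds to about 19 straight-up spins, not 50.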
 
  • #52
verty said:
Taking the game of roulette as an example, should one bet a large amount on black, or a small amount on 23 many times? The argument goes that one will miss 23 60% of the time, while 40% of the time one will hit 23 one or more times.

I think one loses about 2.7% to the house in (single-zero) roulette, so you would lose something like 63% of the time with that strategy, almost 2/3rds. But when you won, you would win a lot back, thanks to multiple hits on some occasions, and the average over time would be that -2.7%.

Betting on black, you'd win ~49% of the time (because it lands on 0 sometimes).
Using your roulette example, say the numbers 1-50 are black and the numbers 51-100 are red. I was unsure whether the problem, as phrased, asked if it was better to bet black/red or to take 50 individual numbers (with one spin). There is no difference there. If there were 51 tables and you could bet the number 1 on 50 tables, or black/red on the 51st, I would take the 50 bets on #1.

My reasoning is that the odds of a single win SEEM the same, and then there is a slight chance of hitting twice, or even of #1 coming up on all 50 tables.

When the odds are slightly against you, the best strategy is to bet small amounts and to have a loss limit. Anyone can have a losing streak or a winning streak. Just don't let a losing streak take you to the poorhouse.
 
  • #53
votingmachine said:
When the odds are slightly against you, the best strategy is to bet small amounts and to have a loss limit. Anyone can have a losing streak or a winning streak. Just don't let a losing streak take you to the poorhouse.
If the odds are against you then a strategy of making a single minimum bet and then walking away is indeed near optimal. However, a strategy that reverses the order of those two steps is even better.
 
  • Like
Likes mfb
  • #54
jbriggs444 said:
If the odds are against you then a strategy of making a single minimum bet and then walking away is indeed near optimal. However, a strategy that reverses the order of those two steps is even better.
Again, that is an all-or-nothing strategy. If you play a game with odds near to a coin flip, you are better off making many small bets. The odds of an outcome far removed from the middle get small.

This is diverging into secondary motivations. If you enjoy gambling, then gamble. If you don't, then don't. I agree that gambling is not all that fun. I've been in Las Vegas for a conference, and I played a small amount of craps. I would not go out of my way to gamble. If you are in Vegas, the best return is to play very small bets, and get a few free drinks (if you like what they have ... I got a free diet coke, and a free coffee). If you don't hit a losing streak, but have mixed results, you walk away about even. And if you like that, then there you go.

I was trying to separate the secondary from the original question, by phrasing it as $500 that goes into a single betting set-up, with the results taking the same amount of time. That removes the question of whether you should just walk away with the money in your wallet intact ... which is what I would do with $500. But if you are obliged to choose, then pick the best option. I think it is the 50 bets.

In a REAL casino, there is no betting strategy that assures you of a win. If your loss limit is $500, and you enjoy the process, then spread the bets out.

But in the synthetic question, there is no enjoyment difference to consider. Just the choice.

In a real casino, you can make a different choice also. Instead of 50 bets at 1%, you can make 50 bets at 49%. The expected outcome for that is a small loss. In a real casino, if you have a $500 loss limit, make many small bets (at the house minimum), and hope for a winning streak. But again, that was not the question, or the assumptions.
 
  • #55
votingmachine said:
Again, that is an all-or-nothing strategy. If you play a game with odds near to a coin flip, you are better off making many small bets. The odds of an outcome far removed from the middle get small.
A single minimum bet nearly minimizes variance. Playing additional games increases variance: the variance of a sum of independent bets is the sum of their variances, so you minimize it by minimizing the number of [minimum] bets. Betting zero times is optimal if your goal is to minimize variance.
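A sketch of that arithmetic for independent even-money bets: each win-or-lose outcome of size b has variance b^2, so n of them have variance n·b^2 and standard deviation b·sqrt(n):

```python
from math import sqrt

def stddev_total(n_bets, bet_size, p=0.5):
    """Standard deviation of total net winnings over n independent
    even-money bets of fixed size, each won with probability p."""
    var_one = bet_size ** 2 * 4 * p * (1 - p)   # variance of one +/- outcome
    return sqrt(n_bets * var_one)

for n in (0, 1, 10, 50):
    print(f"{n:2d} bets of $10: std dev ${stddev_total(n, 10):.2f}")
```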
 
  • #56
There is a large amount of mathematics on this problem. Here is a nice reference which shows that the basic problem still remains open. There are some nice theorems saying that bold play is optimal but they don't cover all interesting cases. https://www.researchgate.net/publication/266704722_The_re-opening_of_Dubins_and_Savage_casino_in_the_era_of_diversification
 
Last edited by a moderator:
  • #57
With an expected value > 0, the Kelly criterion describes the optimal strategy: on each bet where the payoff is 1:1, the proportion of your capital that should be wagered is 2p - 1, where p is the probability of winning and 0.5 < p < 1.

https://en.wikipedia.org/wiki/Kelly_criterion
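As a purely illustrative sketch (the p = 0.55 edge is made up; nothing in the original game has positive expected value), simulating fixed-fraction betting shows why the Kelly fraction 2p - 1 is special:

```python
import random
from statistics import median

def final_bankroll(fraction, p=0.55, rounds=1000):
    """Bet a fixed fraction of the bankroll on repeated 1:1 bets won w.p. p."""
    bankroll = 1.0
    for _ in range(rounds):
        stake = bankroll * fraction
        bankroll += stake if random.random() < p else -stake
    return bankroll

random.seed(1)
kelly = 2 * 0.55 - 1                    # Kelly fraction = 2p - 1 = 0.10
for f in (0.05, kelly, 0.50):
    runs = [final_bankroll(f) for _ in range(200)]
    print(f"fraction {f:.2f}: median final bankroll {median(runs):.2f}")
# The Kelly fraction maximises long-run growth; over-betting (0.50) has the
# same positive edge per bet but typically grinds the bankroll to nothing.
```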
 
  • #58
Why are we introducing more variables like the psychology of the bettor, the casino environment, roulette, etc.?

None of this exists in my game; it's purely numbers. The goal of the game is to maximise your chances of NOT walking away with $0 after your $500 has been bet, so the question is simple: which option gives the better chance?

1 bet of $500
50 games of $10 (played individually, stopping after you win once)
 
  • #59
iDimension said:
Why are we introducing more variables like the psychology of the bettor, the casino environment, roulette, etc.?

None of this exists in my game; it's purely numbers. The goal of the game is to maximise your chances of NOT walking away with $0 after your $500 has been bet, so the question is simple: which option gives the better chance?

1 bet of $500
50 games of $10 (played individually, stopping after you win once)

Suppose it was a claw game, 1 in 2 or 1 in 100. The 2nd strategy would be better because you would save some money if you win the prize early.
 
  • #60
verty said:
Suppose it was a claw game, 1 in 2 or 1 in 100. The 2nd strategy would be better because you would save some money if you win the prize early.
Note that this time around the goal has been clearly stated: maximize the probability of not losing everything. With that spelled out, the first strategy is clearly superior. The result is probability 0.5 of not losing everything versus a probability of 0.395 with the second method.
 
  • #61
jbriggs444 said:
Note that this time around the goal has been clearly stated: maximize the probability of not losing everything. With that spelled out, the first strategy is clearly superior. The result is probability 0.5 of not losing everything versus a probability of 0.395 with the second method.
That is a difference. I wanted to eliminate the non-numerical factors by phrasing it as a single wager both times: a distributed bet vs a single bet. If the goal is not to lose everything, then make the one bet at 50%. Loss avoidance favors that.

(0.5)^1 < (0.99)^50, i.e. Method One's 50% chance of losing everything is smaller than Method Two's ≈60.5%.

The scenario is now phrased as sequential single bets, stopping after one win or after losing the total. That rules out a scenario I had ruled out anyway, where the bets are simultaneously placed on the same outcome ... the roulette wheel with 100 numbers where you bet 50 of them, which gives the same odds as taking the red-black split. Those 50 bets are not independent events.

The most favorable outcome could still be argued (loss avoidance cannot), because the winning scenario with the sequential bets pays the $500 minus $10 for each bet made, plus the $990 win, while the other pays a flat $1000.
 
  • #62
That's silly though. If the goal is to walk away with something, just bet nothing and walk away with $500 guaranteed.
 
  • #63
xman720 said:
That's silly though. If the goal is to walk away with something, just bet nothing and walk away with $500 guaranteed.
I agree. That is why the scenarios and goals need careful definition. It is better not to put large amounts of money into games of chance. Any selection among the options in this thread should carry the warning that it is a selection among artificially limited, bad choices.
 
  • #64
The best option is not to gamble, sure, but I don't think I need to point this out for the 8th time: this is just a thought question and not actually a real-life scenario, which is why I keep telling people to ignore human state of mind, roulette, casino odds, etc. It's PURELY about the numbers.
 
  • #65
iDimension said:
The best option is not to gamble, sure ...
What makes you say that? The expected outcome of not gambling is $50 in your wallet. The expected value of betting it all on a 50-50 $100 payoff is $50 in your wallet. The expected outcome of making $1 bets for a 1-99 $100 payoff until you either win once or your wallet is empty is $50 in your wallet. The expected outcome of making 50 of those $1 bets, win or lose, is $50 in your wallet.

All four scenarios have the same expected outcome. So what is it that now makes you say the best option is to not gamble? (This is a rhetorical question. The answer follows the next quote.)

... but I don't think I need to point this out for the 8th time: this is just a thought question and not actually a real-life scenario, which is why I keep telling people to ignore human state of mind, roulette, casino odds, etc. It's PURELY about the numbers.
Numbers don't distinguish between the four scenarios in this hypothetical problem; each has the same expected outcome. Human state of mind is the key thing that distinguishes between these four scenarios. Someone who is extremely risk averse will just walk away and not bet at all. A moderate risk taker might instead go for the single 50-50 bet. An even more aggressive risk taker might well go for the 50 1-99 bets.

Your example is a zero-sum game. There's nothing that makes the gamble worthwhile. Let's make the gamble worthwhile. Suppose you've saved up half a million dollars. Unless you can live on $20,000 per year, you can't retire on that. Suppose you can set it aside for a decade. One investment advisor gives you a rock-solid guarantee that that half a million will double in that decade. Another advisor offers you a 50-50 gamble: your half million has a 50% chance of becoming three million in a decade, but it also has a 50% chance of dwindling to nothing in that time. Note that the expected utility of this gamble is considerably more than the rock-solid guarantee. Which will you choose?
 
  • Like
Likes pbuk
  • #66
D H said:
Numbers don't distinguish between the four scenarios in this hypothetical problem;
I disagree: you can only say that the Expected Value doesn't distinguish between the four scenarios in this hypothetical problem.

D H said:
Human state of mind is the key thing that distinguishes between these four scenarios. Someone who is extremely risk averse will just walk away and not bet at all. A moderate risk taker might instead go for the single 50-50 bet. An even more aggressive risk taker might well go for the 50 1-99 bets.
I disagree again: we can replace the human state of mind with a utility function that provides numbers to distinguish between these four scenarios.

D H said:
Your example is a zero-sum game. There's nothing that makes the gamble worthwhile. Let's make the gamble worthwhile. Suppose you've saved up half a million dollars. Unless you can live on $20,000 per year, you can't retire on that. Suppose you can set it aside for a decade. One investment advisor gives you a rock-solid guarantee that that half a million will double in that decade. Another advisor offers you a 50-50 gamble: your half million has a 50% chance of becoming three million in a decade, but it also has a 50% chance of dwindling to nothing in that time. Note that the expected utility of this gamble is considerably more than the rock-solid guarantee. Which will you choose?
No, the expected value of the gamble is considerably more than the rock-solid guarantee. You can't say anything about the expected value of the utility function until you have defined that utility function. For many people the prospect of destitution in retirement is unacceptable, so that outcome carries a big negative penalty; however, if they have at least $20,000 a year in pension they could work part time for a few years to top this up to an acceptable level. This might be represented by a utility function that is strongly concave, with a harsh penalty near zero.
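As an illustration only (the logarithmic utility and its $1,000 floor are my own assumptions, not from the thread), such a utility function can rank the guarantee above the higher-expected-value gamble:

```python
from math import log

def utility(wealth):
    """A toy risk-averse utility: logarithmic, floored to reflect that
    outcomes below subsistence are all roughly equally unacceptable."""
    return log(max(wealth, 1_000))      # hypothetical $1,000 floor

guaranteed = utility(1_000_000)                        # the sure doubling
gamble = 0.5 * utility(3_000_000) + 0.5 * utility(0)   # the 50-50 offer
print(f"expected utility, guarantee: {guaranteed:.2f}")  # ~13.82
print(f"expected utility, gamble:    {gamble:.2f}")      # ~10.91
# The gamble has the higher expected value ($1.5M vs $1M) but the lower
# expected utility, so this chooser takes the guarantee.
```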

The thread could be moved to the Statistics and Probability forum, and it could also be locked.
 
  • Like
Likes jbriggs444
  • #67
MrAnchovy said:
I disagree again: we can replace the human state of mind with a utility function that provides numbers to distinguish between these four scenarios.

But that would make the theory useless. I'll give an example.

Suppose you have a classroom and you want to evaluate teaching methods. So you develop a theory about two styles of teaching and wonder which is better. Now if your theory says that the more visual method appeals to more visual learners and the more auditory method appeals to more auditory learners, and the success of either method will depend on the share of visual and auditory learners, it hasn't given you a prediction. You now need to input how many students are visual learners and how many are auditory learners. And you will get an answer like this: "If 40% benefit more from visual learning and 60% benefit more from auditory learning, 40% will prefer the visual method and 60% will prefer the auditory method, and the expected value is 40% times the chosen method's effectiveness for visually preferred learners and 60% of the effectiveness of the chosen method for auditory-preferred learners. Do that calculation and you'll know which method is better." It's not a prediction. A prediction says "method X will be better". And if the theory must wait on psychological profiling of the agent, it isn't a useful theory.

It's like asking, what is the chance this person is lying, and getting the answer, if they have lied in the past, if they dislike you, if they are maniacal, if they distrust you, they are probably lying. But if they don't have those feelings or traits, they are probably telling the truth. This is not a prediction. And if you try to model all of that, you still have to feed in all that information, it's not predicting anything.

And it also makes a grave mistake. You may start to think that learners are either visual or auditory, or that people are either liars or truth tellers. But of course it may come down to the teacher which method is better. It may be that a good teacher can teach either method and a bad teacher can teach neither. It may be that pupils choose the auditory method on the survey because they think it'll mean easier homework, and pupils choose the visual method because they think it'll mean exams are easier to study for. Who knows, right? There is so much variation there, the theory becomes useless at predicting anything.

Yes, it gives you a number that looks good but that number is meaningless and useless, it means nothing.

And in this case, the answer that the gamble, if Poisson, would mean leaving empty-handed 60% of the time, is useless. One doesn't know if it is Poisson because that is a real concern, and the whole idea was to not have real concerns be involved. One has to say, "what if the scenario is Roulette, what if the scenario is a claw game?" And then the answer is, take multibets in a claw game and single bets in Roulette. But to me that is not an answer when one has decided not to deal with real concerns. And anyway, it is only useful when one has a real concern in mind, and then the question should have been, "How can I maximise my odds at Roulette?", "How can I beat this claw game?", etc. The question as posed is unanswerable, and it is wrong to suppose that one can simply add a utility function and have a useful theory. If one has to add a utility function, that means the theory can't predict anything and is splintering into many pieces that each predict one tiny thing, but together they are simply a questionnaire awaiting the real concerns to be supplied.

And, the view that if one plugs in the real concerns, the theory will model the real world, is called Aristotelianism. It says the world is black and white, yes or no, there are no mixtures, no shades. But we know that's not true. Is an apple really red? Is a wave in the ocean really a sine wave or a Fourier composite of sinusoids? Isn't that just an excuse to say it is unpredictable? We still predict the height of the waves when there is an earthquake, but is there any theory that predicts how many waves will arrive in an hour and the spacing of them? I think not. And for similar reasons, I reject the height prediction as well. The real world is never going to match the model; the world is not Aristotelian.

When it comes to gambling, there is nothing predictable about it. The whole point is to be unpredictable. If there is ever a game that becomes predictable, it'll be changed or updated so that it isn't anymore. There's a kind of uncertainty principle: as soon as you can predict, you can predict that you will lose.

There is market research that looks at number of houses, buyers in a vicinity, competition, etc, that can produce a number of dollars of potential spend on a product type. But I class that as something different, I don't think of it as the probability of making X dollars selling this product. And I would argue that any theory like that is useless.
 
  • #68
You're entitled to your opinion, but in the real world we have to make decisions based on incomplete information. If we don't decide on which teaching method to use, the class doesn't get taught. If we don't do market research we end up supplying the wrong quantity of product at the wrong price to a market and either we sell out early and have not maximised profit or we end up writing off overstocks.

If it is your argument that decision theory is useless, in that in every case we would be no better off using decision theory than if we just guessed what to do, I don't think that argument is supportable.
 
  • #69
verty said:
Suppose you have a classroom and you want to evaluate teaching methods. So you develop a theory about two styles of teaching and wonder which is better. Now if your theory says that the more visual method appeals to more visual learners and the more auditory method appeals to more auditory learners, and the success of either method will depend on the share of visual and auditory learners, it hasn't given you a prediction. You now need to input how many students are visual learners and how many are auditory learners. And you will get an answer like this: "If 40% benefit more from visual learning and 60% benefit more from auditory learning, 40% will prefer the visual method and 60% will prefer the auditory method, and the expected value is 40% times the chosen method's effectiveness for visually preferred learners and 60% of the effectiveness of the chosen method for auditory-preferred learners. Do that calculation and you'll know which method is better." It's not a prediction. A prediction says "method X will be better". And if the theory must wait on psychological profiling of the agent, it isn't a useful theory.
The analogue of the utility function here is the classroom composition.

The best teaching method will depend on the class composition, there is no way to avoid that. You can find an average class composition, in the same way you can find a typical utility function* of humans. Individual persons might still have different utility functions, in the same way individual classes can have different compositions.

* this has been tried, of course, and it has been shown that humans do not follow consistent utility functions. The way equivalent choices are phrased is important, and there are systems where most prefer option A over option B, option B over option C and ... option C over option A.
 
  • #70
In a game of blackjack I went with the single high-stakes bet and it paid off.

Most games are designed so that the house wins just a little more than the player.
With that logic, I decided that my best odds were to bet it all on one hand and not suffer the almost inevitable attrition of subsequent plays.
I did. I won.

And then I walked away.
 
