Unsolved statistics questions from other sites, part II

  • MHB
  • Thread starter: chisigma
  • Tags: Statistics
In summary, this conversation discusses unsolved statistics questions posted on other websites such as Art of Problem Solving. One such question involves a game where a random number generator selects an integer between 1 and n, and the player wins n dollars if n is selected and loses 1 dollar otherwise. The player continues to press the button until they have more money than they started with or break "even". The conversation involves members proposing solutions and discussing the expected value of the number of times the button must be pressed. There is also some confusion about the wording and specifics of the problem.
  • #106
chisigma said:
What I said in point b) is 'half correct' and 'half wrong', in the sense that all the favourable cases are represented by both the $\displaystyle S_{3}-S_{2}- S_{1}$ and the $\displaystyle S_{3}-S_{1}-S_{2}$ sequences. To understand that, let us suppose that the sectors have angles $\displaystyle \theta_{3} = \theta_{2}= \theta_{1}=0$, so that the probability of no overlap is of course P=1. If we consider only the sequence $\displaystyle S_{3}-S_{2}- S_{1}$ and proceed, we obtain... $\displaystyle P = \frac{1}{4 \pi^{2}}\ \int_{0}^{2 \pi} d x \int_{x}^{2 \pi} d y = \frac{1}{2}$

... and that demonstrates that the sequence $\displaystyle S_{3}-S_{1}-S_{2}$ must also be taken into account. Proceeding along these lines we obtain...

$\displaystyle P = \frac{1}{4 \pi^{2}}\ \int_{\frac{3}{10} \pi}^{\frac{17}{10} \pi} d x \int_{x + \frac{2}{10} \pi}^{\frac{19}{10} \pi} d y + \frac{1}{4 \pi^{2}}\ \int_{\frac{3}{10} \pi}^{\frac{17}{10} \pi} d x \int_{x + \frac{1}{10} \pi}^{\frac{18}{10} \pi} d y = 2\ \frac{49}{200} = \frac{49}{100}$

Kind regards

$\chi$ $\sigma$
Which agrees with the double-checked MC estimate of \(\displaystyle 0.4900 \pm 0.0003\) (2 SE).

Python Script:
Code:
import numpy as np

# angular parameters of the three sectors
a = np.pi / 10
b = 2.0 * np.pi / 10
c = 3.0 * np.pi / 10
N = 10000000

# uniform random angles for the two freely placed sectors
theb = np.random.rand(N) * 2.0 * np.pi
thec = np.random.rand(N) * 2.0 * np.pi

thed = thec - theb
thee = theb - thec

# position constraints of each sector with respect to the fixed one
test1 = np.logical_and(theb > a, theb + b < 2 * np.pi)
test2 = np.logical_and(thec > a, thec + c < 2 * np.pi)

# non-overlap conditions for the two possible orderings of the sectors
test3 = np.logical_and(np.logical_and(thed > 0, thed > b), thed + c < 2 * np.pi)
test4 = np.logical_and(np.logical_and(thee > 0, thee > c), thee + b < 2 * np.pi)
test5 = np.logical_or(test3, test4)

TEST0 = np.logical_and(test1, test2)
TEST = np.logical_and(TEST0, test5)

PP = np.sum(TEST) / N            # Monte Carlo estimate of the probability
SE = np.sqrt(PP * (1 - PP) / N)  # its standard error
print(PP, SE)

(The errors in the previous MC estimate were probably due to my still learning Python and misusing element-wise logical operators on NumPy arrays.)
 
  • #107
chisigma said:
Posted on 8 27 2013 on www.artofproblemsolving.com by the user aktyw19 and not yet solved...

Points A, B and C are randomly chosen inside a circle. A fourth point, O, is chosen. What is the probability that O lies inside triangle ABC?...

The requested probability is the ratio between the expected area of the triangle ABC and the area of the circle, and clearly we can suppose that the circle is the unit circle. If $\displaystyle (x_{1},y_{1}), (x_{2},y_{2}),(x_{3},y_{3})$ are the coordinates of A, B and C, the area of the triangle is...

$\displaystyle A= \frac{1}{2}\ (x_{1} y_{2} - x_{2} y_{1} + x_{2} y_{3} - x_{3} y_{2} + x_{3} y_{1} - x_{1} y_{3})\ (1)$

... and now we pass to polar coordinates...

$\displaystyle x_{1}= r_{1}\ \cos \theta_{1},\ y_{1}= r_{1}\ \sin \theta_{1}$

$\displaystyle x_{2}= r_{2}\ \cos \theta_{2},\ y_{2}= r_{2}\ \sin \theta_{2}$

$\displaystyle x_{3}= r_{3}\ \cos \theta_{3},\ y_{3}= r_{3}\ \sin \theta_{3}$

It is not a limitation to suppose $\displaystyle \theta_{1}=0$, so that (1) becomes...

$\displaystyle A = \frac{1}{2}\ \{r_{1} r_{2} \sin \theta_{2} + r_{2} r_{3} \sin (\theta_{3} - \theta_{2}) - r_{1} r_{3} \sin \theta_{3} \}\ (2)$

It is fully evident that the contribution of the first and third terms in brackets is 0, so that...

$\displaystyle E \{A\} = 2 \int_{0}^{1} \int_{0}^{1} \int_{0}^{2 \pi} r_{2}^{2} r_{3}^{2} (\frac{1}{\pi} - \frac{x}{2})\ \sin x\ d r_{2} d r_{3} d x = \int_{0}^{1} \int_{0}^{1} \int_{0}^{2 \pi} r_{2}^{2} r_{3}^{2} |\sin x - x \cos x|_{0}^{2 \pi} d r_{2} d r_{3} = $

$\displaystyle = \pi\ \int_{0}^{1} \int_{0}^{1} r_{2}^{2} r_{3}^{2} d r_{2} d r_{3} = \frac{\pi}{3} \int_{0}^{1} r_{2}^{2}\ d r_{2} = \frac{\pi}{9}\ (3)$

... so that the requested probability is $\displaystyle P = \frac{1}{9}$...

Kind regards

$\chi$ $\sigma$
 
  • #108
Posted the 09 26 2013 on Math Help Forum - Free Math Help Forums by the user JellyOnion and not yet solved...

John is shooting at a target. His probability of hitting the target is 0.6. What is the minimum number of shots needed for the probability of John hitting the target exactly 5 times to be more than 25%?...

Kind regards

$\chi$ $\sigma$
 
  • #109
chisigma said:
Posted the 09 26 2013 on Math Help Forum - Free Math Help Forums by the user JellyOnion and not yet solved...

John is shooting at a target. His probability of hitting the target is 0.6. What is the minimum number of shots needed for the probability of John hitting the target exactly 5 times to be more than 25%?...

What we have to compute is the probability of at least 5 hits in n shots...

$\displaystyle n=5,\ p= (.6)^{5} = .07776 \\ n=6,\ p = (.6)^{6} + 6\ (.6)^{5}\ (.4) = .23328 \\ n=7,\ p=(.6)^{7} + 7\ (.6)^{6}\ (.4) + 21\ (.6)^{5}\ (.4)^{2} = .419904$

... so that the minimum number of shots is n=7...
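
A quick numerical check of these binomial values, reading the question as 'at least 5 hits' as above (a minimal sketch using only the Python standard library):

Code:
from math import comb

p = 0.6
for n in range(5, 9):
    # P(at least 5 hits in n shots) with hit probability p = 0.6
    at_least_5 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5, n + 1))
    print(n, round(at_least_5, 6))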

Kind regards

$\chi$ $\sigma$
 
  • #110
Posted the 09 29 2013 on www.artofproblemsolving.com by the user tensor and not yet solved...

... three points are chosen randomly on a circle... find the probability that those points form an obtuse-angled triangle...

Kind regards

$\chi$ $\sigma$
 
  • #111
Posted the 10 02 2013 on www.mathhelpforum.com by the user Nora314 and not yet solved...

Let X and Y be independent Poisson distributed stochastic variables, with expectation values
5 and 10, respectively. Calculate the following probabilities: X + Y > 10


Kind regards

$\chi$ $\sigma$
 
  • #112
chisigma said:
Posted the 10 02 2013 on www.mathhelpforum.com by the user Nora314 and not yet solved...

Let X and Y be independent Poisson distributed stochastic variables, with expectation values
5 and 10, respectively. Calculate the following probabilities: X + Y > 10




If X and Y are two Poisson distributed r.v. with mean values $\lambda_{x}$ and $\lambda_{y}$, then...

$\displaystyle P \{ X = n \} = \frac{\lambda_{x}^{n}}{n!}\ e^{- \lambda_{x}},\ P \{ Y = n \} = \frac{\lambda_{y}^{n}}{n!}\ e^{- \lambda_{y}}\ (1)$

If Z = X + Y is another r.v., then...

$\displaystyle P \{ Z = n \} = \sum_{k =0}^{n} \frac{\lambda_{x}^{k}}{k!}\ e^{- \lambda_{x}}\ \frac{\lambda_{y}^{n-k}}{(n-k)!}\ e^{- \lambda_{y}} = e^{- (\lambda_{x} + \lambda_{y})}\ \sum_{k = 0}^{n} \frac{\lambda_{x}^{k}\ \lambda_{y}^{n-k}}{k!\ (n-k)!} = $

$\displaystyle = \frac{e^{- (\lambda_{x} + \lambda_{y})}}{n!} \sum_{k = 0}^{n} \binom {n}{k}\ \lambda_{x}^{k}\ \lambda_{y}^{n-k} = \frac{e^{- (\lambda_{x} + \lambda_{y})}}{n!}\ (\lambda_{x} + \lambda_{y})^{n}\ (2)$

... so that Z is also Poisson distributed, with mean value $\displaystyle \lambda= \lambda_{x} + \lambda_{y}$. In our case $\lambda = 15$, so that the probability that $Z \le 10$ is...

$\displaystyle P = e^{- 15}\ \sum_{n=0}^{10} \frac{15^{n}}{n!} = .118464...\ (3)$

... and the requested probability is...

$\displaystyle 1 - P = .881536...\ (4)$

Kind regards

$\chi$ $\sigma$
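
A numerical check of (3) and (4), a minimal sketch assuming SciPy is available:

Code:
from scipy.stats import poisson

lam = 15                        # lambda_x + lambda_y = 5 + 10
P_le_10 = poisson.cdf(10, lam)  # P{Z <= 10}
print(P_le_10, 1 - P_le_10)     # approximately 0.1185 and 0.8815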

 
  • #113
Posted the 10 08 2013 on www.mathhelpforum.com by the user Nora314 and not yet solved...

Consider a parallel system of 2 independent components. The lifetime of each component is exponentially distributed with parameter $\lambda$. Let V be the lifetime of the system. Find the distribution of V, and E(V)...

Kind regards

$\chi$ $\sigma$
 
  • #114
chisigma said:
Posted the 10 08 2013 on www.mathhelpforum.com by the user Nora314 and not yet solved...

Consider a parallel system of 2 independent components. The lifetime of each component is exponentially distributed with parameter $\lambda$. Let V be the lifetime of the system. Find the distribution of V, and E(V)...

The p.d.f. of the lifetime T of each component is... $\displaystyle f(t) = \lambda\ e^{- \lambda\ t},\ t \ge 0\ (1)$

... so that, if $\displaystyle V= \text {max} [T_{1},T_{2}]$, the c.d.f. of V is... $\displaystyle F_{v} (v) = P \{T_{1} \le v\}\ P \{T_{2} \le v\} = (1 - e^{- \lambda\ v})^{2} = 1 - 2\ e^{- \lambda\ v} + e^{-2\ \lambda\ v}\ (2)$

The p.d.f. of V is obtained by differentiating (2)... $\displaystyle f_{v} (v) = 2\ \lambda\ (e^{- \lambda\ v} - e^{- 2\ \lambda\ v})\ (3)$

... and the expected value of V is...

$\displaystyle E \{ V \} = 2\ \lambda\ \int_{0}^{\infty} v\ (e^{- \lambda\ v} - e^{- 2\ \lambda\ v})\ dv = \frac{3}{2\ \lambda}\ (4)$
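
A Monte Carlo check of (4), a sketch with $\lambda = 1$ assumed for the simulation:

Code:
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0                       # rate assumed for the check
n = 10**6
t1 = rng.exponential(1 / lam, n)
t2 = rng.exponential(1 / lam, n)
v = np.maximum(t1, t2)          # lifetime of the parallel system
print(v.mean(), 3 / (2 * lam))  # both approximately 1.5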

Kind regards

$\chi$ $\sigma$
 
  • #115
Posted the 10 22 2013 on www.artofproblemsolving.com by the user robmath and not yet solved...

A point hops in the Euclidean plane. It starts at (0,0) and, at each hop, if the point is currently at position X, a vector v of length 1 and uniformly random direction is chosen and the point jumps to X + v. After three jumps, what is the probability that the point is inside the unit disk?...

Kind regards

$\chi$ $\sigma$
 
  • #116
chisigma said:
Posted the 10 22 2013 on www.artofproblemsolving.com by the user robmath and not yet solved...

A point hops in the Euclidean plane. It starts at (0,0) and, at each hop, if the point is currently at position X, a vector v of length 1 and uniformly random direction is chosen and the point jumps to X + v. After three jumps, what is the probability that the point is inside the unit disk?...

If we indicate a vector of modulus 1 by $\displaystyle e^{i\ \theta}$, $\theta$ being uniformly distributed between $- \pi$ and $\pi$, then the final position is represented by...

$\displaystyle s = e^{i\ \theta_{1}} + e^{i\ \theta_{2}} + e^{i\ \theta_{3}}\ (1)$

... so that the problem consists in finding the probability that the quantity $\displaystyle |s|$ is less than 1. It is convenient to break the work into several parts, and we first sum two vectors...

$\displaystyle u = 1 + e^{i\ \theta_{1}}\ (2)$

... with $\displaystyle \theta_{1}$ uniformly distributed from $- \pi$ to $\pi$, and find the p.d.f. of $\displaystyle X= |u|$. We have...

$\displaystyle X = \sqrt{(1 + e^{i\ \theta_{1}})\ (1 + e^{- i\ \theta_{1}})} = \sqrt{2\ (1+ \cos \theta_{1})} = 2\ \cos \frac{\theta_{1}}{2}\ (3)$

... then...

$\displaystyle P \{ X < x\} = \frac{2}{\pi}\ \int_{\cos^{-1} \frac{x}{2}}^{\frac{\pi}{2}} d \theta_{1} = 1 - \frac{2}{\pi}\ \cos^{-1} \frac{x}{2}\ (4)$

... with $\displaystyle 0 < x < 2$ and the p.d.f. of X is...

$\displaystyle f_{X} (x) = \frac{1}{\pi\ \sqrt {1 - \frac{x^{2}}{4}}}\ (5)$

As a second step we consider the r.v. Y defined as...

$\displaystyle Y = |X + e^{i\ \theta_{2}}|,\qquad Y^{2} = (X + e^{i\ \theta_{2}})\ (X + e^{- i\ \theta_{2}}) = X^{2} +2\ X\ \cos \theta_{2} + 1\ (6)$

Now what we have to do is evaluate the probability $\displaystyle P \{Y < 1\}$, and observing (6) we realize that it is equal to the probability $\displaystyle P \{X < -2\ \cos \theta_{2}\} $, that is...

$\displaystyle P = \int \int_{A} f_{X} (x)\ f_{Y} (y)\ dy\ dx\ (7)$

... where $f_{X} (x)$ is given by (5) and $f_{Y} (y)$ is the p.d.f. of the r.v. $-2\ \cos \theta_{2}$, namely...

$\displaystyle f_{Y} (y) = \frac{1}{2\ \pi \sqrt{1 - \frac{y^{2}}{4}}}\ (8)$

... and A is the region coloured in yellow in the figure... http://www.123homepage.it/u/i78946596._szw380h285_.jpg.jfif

Explicit computation of P is...

$\displaystyle P = \frac{1}{2\ \pi^{2}}\ \int_{0}^{2}\ \int_{-2}^{- x}\ \frac{d y\ d x}{\sqrt{1 - \frac{x^{2}}{4}}\ \sqrt{1- \frac{y^{2}}{4}}} = \frac{1}{2\ \pi^{2}}\ \int_{0}^{2} \frac{\pi - 2\ \sin^{-1} \frac{x}{2}}{\sqrt{1- \frac{x^{2}}{4}}}\ dx = \frac{\pi^{2}}{4\ \pi^{2}} = \frac{1}{4}\ (9)$
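
A Monte Carlo check of (9), in the style of the script in post #106 (a sketch):

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
theta = rng.uniform(-np.pi, np.pi, size=(3, N))  # three independent unit steps
s = np.exp(1j * theta).sum(axis=0)               # final position after three jumps
P = np.mean(np.abs(s) < 1.0)
SE = np.sqrt(P * (1 - P) / N)
print(P, SE)                                     # P is approximately 0.25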

Kind regards

$\chi$ $\sigma$
 
  • #117
Posted the 11 06 2013 on www.artofproblemsolving.com by the user MANMAID and not yet solved...

Suppose two teams play a series of games, each producing a winner and loser, until one team has won two more games than the other. Let G be the total number of games played. Assume each team has a chance of 0.5 to win each game, independent of the results of the previous games.

find the probability distribution of G.
find the expected value of G.


Kind regards

$\chi$ $\sigma$
 
  • #118
chisigma said:
Posted the 11 06 2013 on www.artofproblemsolving.com by the user MANMAID and not yet solved...

Suppose two teams play a series of games, each producing a winner and loser, until one team has won two more games than the other. Let G be the total number of games played. Assume each team has a chance of 0.5 to win each game, independent of the results of the previous games.

find the probability distribution of G.
find the expected value of G.

In a slightly different form, this is one of the most classical problems concerning Markov chains. The number of states is five and we can simply call them 0, 1, 2, 3, 4. The initial state is 0 and the two absorbing states are 3 and 4. The transition probability matrix, written in 'canonical form', is...

$\displaystyle P = \left | \begin{matrix} 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & 0 & 0 & \frac{1}{2} & 0 \\ \frac{1}{2} & 0 & 0 & 0 & \frac{1}{2} \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{matrix} \right| $ (1)

Setting $\displaystyle g(n) = P \{ G=n\}$, it is easy to see that $g(n) = 0$ for $n$ odd. For $n$ even, setting $\displaystyle s(n) = a_{0,3} + a_{0,4}$, the sum of the two top right elements of the matrix $P^{n}$, we have $g(n)= s(n)- s(n-2)$. Proceeding we obtain...

$\displaystyle s(0) = 0 \implies g(0)=0$

$\displaystyle s(2) = \frac{1}{2} \implies g(2)= \frac{1}{2}$

$\displaystyle s(4) = \frac{3}{4} \implies g(4)= \frac{1}{4}$

$\displaystyle s(6) = \frac{7}{8} \implies g(6)= \frac{1}{8}$

$\displaystyle s(8) = \frac{15}{16} \implies g(8)= \frac{1}{16}$

... and it is fully evident that $\displaystyle g(2\ n)= \frac{1}{2^{n}}$. The expected value of G is therefore...

$\displaystyle E \{ G \} = 2\ \sum_{n=1}^{\infty} \frac {n}{2^{n}} = 4\ (2)$
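
A Monte Carlo check of the distribution of G and of (2) (a sketch):

Code:
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
games = np.empty(trials, dtype=int)
for i in range(trials):
    diff, g = 0, 0
    while abs(diff) < 2:                   # play until one team is two games ahead
        diff += 1 if rng.random() < 0.5 else -1
        g += 1
    games[i] = g
print(games.mean())                        # approximately 4
for n in (2, 4, 6, 8):
    print(n, np.mean(games == n))          # approximately 1/2, 1/4, 1/8, 1/16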

Kind regards

$\chi$ $\sigma$
 
  • #119
Posted the 11 25 2013 on www.artofproblemsolving.com by the user erbed and not yet solved...

John has n dollars and he starts flipping a fair coin. Each time the result is a head he wins one dollar, otherwise he loses one dollar. This 'game' ends when he reaches the amount of m > n dollars or when he loses all his money. What is the probability that he wins this game?... [Give the answer in terms of n and m...]

Kind regards

$\chi$ $\sigma$
 
  • #120
Posted the 11 27 2013 on www.talkstats.com by the user TrueTears and not yet solved...

Let Z = X + Y where $\displaystyle X \sim N (\mu,\sigma^2)$ and $\displaystyle Y \sim \Gamma (k, \theta)$. Also assume X and Y are independent. Then what is the distribution (pdf) of Z?...

Kind regards

$\chi$ $\sigma$
 
  • #121
Posted the 12 04 2013 on www.artofproblemsolving.com by the user herrmann and not yet solved...

You have a die with 10 sides, with numbers ranging from 1 to 10. Each number comes up with equal probability. You sum the numbers you get until the sum is greater than 100. What is the expected value of your sum?...

Kind regards

$\chi$ $\sigma$
 
  • #122
chisigma said:
Posted the 12 04 2013 on www.artofproblemsolving.com by the user herrmann and not yet solved...

You have a die with 10 sides, with numbers ranging from 1 to 10. Each number comes up with equal probability. You sum the numbers you get until the sum is greater than 100. What is the expected value of your sum?...

If the sum is greater than 100 on the k-th roll, then $101 \le S(k) \le 110$ and therefore $91 \le S(k-1) \le 100$. Supposing that the possible values of S(k-1) all have the same probability $p = \frac{1}{10}$, the same holds for S(k), so that the expected value is...

$\displaystyle E \{S(k)\} = \frac{101 + 102 + ... + 110}{10} = 105.5$

Kind regards

$\chi$ $\sigma$
 
  • #123
Posted the 01 07 2014 on www.artofproblemsolving.com by the user herrmann and not yet solved...

... what is the probability that in a group of k people there are at least two persons who have birthdays on the same date or on consecutive dates [here we consider January 1 to be consecutive to December 31]?...

Kind regards

$\chi$ $\sigma$
 
  • #124
chisigma said:
Posted the 01 07 2014 on www.artofproblemsolving.com by the user herrmann and not yet solved...

... what is the probability that in a group of k people there are at least two persons who have birthdays on the same date or on consecutive dates [here we consider January 1 to be consecutive to December 31]?...

Let's suppose one of the k people was born on the first of June. If k=2, the probability that the second fellow wasn't born in the period 31 May - 2 June is $\frac{362}{365}$. If k=3, the probability that the third fellow also wasn't born in the period 30 May - 3 June is $\frac{362}{365}\ \frac{360}{365}$. Proceeding in this way we find that the probability that no two of the k birthdays are equal or adjacent is...

$\displaystyle P_{1}(k) = \frac{362}{365}\ \frac{360}{365}\ \cdots\ \frac{366 - 2\ k}{365} = \frac{181!\ 2^{k - 1}}{365^{k - 1}\ (182 - k)!}\ (1) $

In the 'classical case' k=23, the probability that all the people were born on different days is $P_{0} (23)= .49...$; here instead $P_{1} (23) = .22...$, so that the probability of at least one such 'coincidence' is about .78...

Kind regards

$\chi$ $\sigma$
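
Evaluating the product in (1) numerically for the classical k = 23, together with the standard distinct-birthday product $P_{0}$ (a sketch):

Code:
from math import prod

k = 23
P0 = prod((365 - j) / 365 for j in range(1, k))          # all birthdays distinct
P1 = prod((366 - 2 * j) / 365 for j in range(2, k + 1))  # product (1): no equal or adjacent dates
print(round(P0, 4), round(P1, 4))                        # approximately 0.49 and 0.22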
 
  • #125
Posted the 01 29 2014 on http://www.mymathforum.com by the user 20824 and not yet solved...

Suppose that X is an exponentially distributed r.v. with unknown parameter $\lambda$ and a random error $\epsilon$ is added to X to give $Y = X + \epsilon$. $\epsilon$ is equal to 0 with probability p and 1 with probability 1-p. What is the distribution of X conditional on Y?...

Kind regards

$\chi$ $\sigma$
 
  • #126
Posted some years ago on an Italian math forum and not solved...

A maker, to promote his product, includes in each popcorn package a prize [a colored pencil, a toy animal, a picture card, etc...] randomly chosen among n different types. What is the expected number of packages to be purchased if one wants to have the entire set of prizes?...

Kind regards

$\chi$ $\sigma$
 
  • #127
chisigma said:
Posted some years ago on an Italian math forum and not solved...

A maker, to promote his product, includes in each popcorn package a prize [a colored pencil, a toy animal, a picture card, etc...] randomly chosen among n different types. What is the expected number of packages to be purchased if one wants to have the entire set of prizes?...

In the original post n=6 was proposed, but it is better to analyse the general case. The failure of the attempts to solve the problem is probably due to the fact that nobody realized that this is in fact a Markov chain problem with n states. On the first purchase one of the prizes is acquired in any case, and the missing prizes are n-1. On the second purchase either one finds the same prize as in the first and no progress is made, or one adds a new prize to his collection and the missing prizes are n-2. Proceeding in this way, the 'game' finishes when the final state n is reached. The state diagram is represented in the figure...

[Figure: state diagram of the prize-collecting Markov chain]



The transition matrix is...

$\displaystyle P = \left | \begin{matrix} \frac{1}{n}& \frac{n-1}{n} & 0 & 0 & \cdot & \cdot & \cdot & 0 & 0 & 0 \\ 0& \frac{2}{n} & \frac{n-2}{n} & 0 & \cdot & \cdot & \cdot & 0 & 0 & 0 \\ 0 & 0 & \frac{3}{n} & \frac{n-3}{n} & \cdot & \cdot & \cdot & 0 & 0 & 0 \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & 0 & 0 & \cdot & \cdot & \cdot & \frac{3}{n} & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdot & \cdot & \cdot & \frac {n-2}{n} & \frac{2}{n} & 0 \\ 0 & 0 & 0 & 0 & \cdot & \cdot & \cdot & 0 & \frac{n-1}{n} & \frac{1}{n} \\ 0& 0 & 0 & 0 & \cdot & \cdot & \cdot & 0 & 0 & 1\end{matrix} \right| $ (1)

Now we proceed as in...

http://mathhelpboards.com/basic-probability-statistics-23/expected-number-questions-win-game-4154.html#post18909

... starting from n=2 and finding A(n), i.e. the mean number of purchases necessary to acquire the entire set of prizes...

n=2

In this case is...

$\displaystyle Q = \frac{1}{2} \implies I - Q = \frac{1}{2} \implies (I - Q)^{- 1} = 2 \implies A(2)= 1 + 2 = 3$

n=3

In this case is...

$\displaystyle Q = \left | \begin{matrix} \frac{1}{3} & \frac{2}{3} \\ 0 & \frac{2}{3} \end{matrix} \right | \implies I - Q = \left | \begin{matrix} \frac{2}{3} & - \frac{2}{3} \\ 0 & \frac{1}{3} \end{matrix} \right | \implies (I - Q)^{-1} = \left | \begin{matrix} \frac{3}{2} & 3 \\ 0 & 3 \end{matrix} \right | \implies A(3) = 1 + 3 + \frac{3}{2} = \frac{11}{2}$

n=4

In this case is...

$\displaystyle Q = \left | \begin{matrix} \frac{1}{4} & \frac{3}{4} & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & \frac{3}{4} \end{matrix} \right | \implies I - Q = \left | \begin{matrix} \frac{3}{4} & - \frac{3}{4} & 0 \\ 0 & \frac{1}{2} & - \frac{1}{2} \\ 0 & 0 & \frac {1}{4} \end{matrix} \right | \implies (I - Q)^{-1} = \left | \begin{matrix} \frac{4}{3} & 2 & 4 \\ 0 & 2 & 4 \\ 0 & 0 & 4 \end{matrix} \right | \implies A(4) = 1 + 2 + 4 + \frac{4}{3} = \frac{25}{3}$


n=5

In this case is...

$\displaystyle Q = \left | \begin{matrix} \frac{1}{5} & \frac{4}{5} & 0 & 0 \\ 0 & \frac{2}{5} & \frac{3}{5} & 0 \\ 0 & 0 & \frac{3}{5} & \frac{2}{5} \\ 0 & 0 & 0 & \frac{4}{5} \end{matrix} \right | \implies I - Q = \left | \begin{matrix} \frac{4}{5} & - \frac{4}{5} & 0 & 0 \\ 0 & \frac{3}{5} & - \frac{3}{5} & 0 \\ 0 & 0 & \frac {2}{5} & - \frac{2}{5} \\ 0 & 0 & 0 & \frac{1}{5} \end{matrix} \right | \implies (I - Q)^{-1} = \left | \begin{matrix} \frac{5}{4} & \frac{5}{3} & \frac{5}{2} & 5 \\ 0 & \frac{5}{3} & \frac{5}{2} & 5 \\ 0 & 0 & \frac{5}{2} & 5 \\ 0 & 0 & 0 & 5 \end{matrix} \right | \implies $

$\displaystyle \implies A(5) = 1 + \frac{5}{4} + \frac{5}{3} + \frac{5}{2} + 5 = \frac{137}{12}$

n=6

In this case is...

$\displaystyle Q = \left | \begin{matrix} \frac{1}{6} & \frac{5}{6} & 0 & 0 & 0 \\ 0 & \frac{1}{3} & \frac{2}{3} & 0 & 0 \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 0 & \frac{2}{3} & \frac{1}{3} \\ 0 & 0 & 0 & 0 & \frac{5}{6} \end{matrix} \right | \implies I - Q = \left | \begin{matrix} \frac{5}{6} & - \frac{5}{6} & 0 & 0 & 0 \\ 0 & \frac{2}{3} & - \frac{2}{3} & 0 & 0 \\ 0 & 0 & \frac {1}{2} & - \frac{1}{2} & 0 \\ 0 & 0 & 0 & \frac{1}{3} & - \frac{1}{3} \\ 0 & 0 & 0 & 0 & \frac{1}{6} \end{matrix} \right | \implies$

$\displaystyle \implies (I - Q)^{-1} = \left | \begin{matrix} \frac{6}{5} & \frac{3}{2} & 2 & 3 & 6 \\ 0 & \frac{3}{2} & 2 & 3 & 6\\ 0 & 0 & 2 & 3 & 6 \\ 0 & 0 & 0 & 3 & 6 \\ 0 & 0 & 0 & 0 & 6 \end{matrix} \right | \implies A(6) = 1 + \frac{6}{5} + \frac{3}{2} + 2 + 3 + 6 = \frac{147}{10}$

Observing the results we have obtained, it does not seem necessary to proceed with n>6, and we can confidently conclude that the general result is...

$\displaystyle A(n) = n\ \sum_{k=1}^{n} \frac{1}{k} = n\ H_{n}\ (2)$

... and (2) is an interesting result which can be useful in many application fields...
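
A short script that builds the matrix Q of (1) for a general n, computes the fundamental matrix $(I - Q)^{-1}$ as above, and compares the result with (2) (a sketch):

Code:
import numpy as np

def expected_purchases(n):
    # transient states j = 1, ..., n-1 (number of distinct prizes already owned)
    Q = np.zeros((n - 1, n - 1))
    for j in range(1, n):
        Q[j - 1, j - 1] = j / n           # the same prize is found again
        if j < n - 1:
            Q[j - 1, j] = (n - j) / n     # a new prize is found
    F = np.linalg.inv(np.eye(n - 1) - Q)  # fundamental matrix (I - Q)^{-1}
    return 1 + F[0].sum()                 # first purchase + expected further purchases

for n in range(2, 8):
    H = sum(1 / k for k in range(1, n + 1))
    print(n, expected_purchases(n), n * H)  # the two values agree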

Kind regards

$\chi$ $\sigma$
 
  • #128
Posted the 03 21 2014 on www.matematicamente.it by the user biglio23 [original in Italian...] and not yet solved...

In a restaurant n people leave their umbrellas at the entrance. The first person leaving the restaurant chooses an umbrella at random. Each successive person takes their own umbrella if they find it, otherwise they choose an umbrella at random. What is the probability that the last person finds his own umbrella?...

Kind regards

$\chi$ $\sigma$
 
  • #129
chisigma said:
Posted the 03 21 2014 on www.matematicamente.it by the user biglio23 [original in Italian...] and not yet solved...

In a restaurant n people leave their umbrellas at the entrance. The first person leaving the restaurant chooses an umbrella at random. Each successive person takes their own umbrella if they find it, otherwise they choose an umbrella at random. What is the probability that the last person finds his own umbrella?...

This problem too can be treated as a Markov chain. If n is the number of persons leaving the restaurant, then the number of states is n+1 and the process ends, after at most n-1 steps, in the absorbing state OK [the last person finds his own umbrella...] or KO [the last person doesn't...]. The state diagram is shown in the figure...

[Figure: state diagram of the umbrella process]


In the first step we have three possibilities...

a) the first person chooses his own umbrella and the process ends in OK...

b) the first person chooses the umbrella of the last person and the process ends in KO...

c) the first person chooses an umbrella different from those in a) and b) and the process goes on to a further step...

What is important to see is that in case c) the remaining process is the same as the original process with n-1 persons, so that the final results of the two processes are the same, i.e. the result is independent of n. Taking this into account, we can choose n=3, in which case the transition matrix is...

$\displaystyle A = \left | \begin{matrix} 0 & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right |\ (1)$

The probability that the process ends in OK is the term $a_{1,4}$ of the matrix $A^{2}$, i.e. ... $\displaystyle P = 0 + \frac{1}{6} + 0 + \frac{1}{3} = \frac{1}{2}\ (2)$
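
A Monte Carlo check of (2), simulating the umbrella process directly for a few values of n (a sketch):

Code:
import random

def last_gets_own(n):
    available = set(range(n))               # umbrellas still at the entrance
    pick = random.choice(tuple(available))  # the first person picks at random
    available.remove(pick)
    for person in range(1, n):
        if person in available:
            pick = person                   # own umbrella is still there
        else:
            pick = random.choice(tuple(available))
        available.remove(pick)
    return pick == n - 1                    # did the last person get his own umbrella?

trials = 100_000
for n in (3, 5, 10):
    p = sum(last_gets_own(n) for _ in range(trials)) / trials
    print(n, p)                             # approximately 0.5 for every n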

Kind regards

$\chi$ $\sigma$
 
  • #130
Posted the 03 19 2014 on www.artofproblemsolving.com by the user Tetrapak1234 and not yet solved...

Two independent r.v. X and Y are given, both with p.d.f. $\displaystyle f(x) = \frac{1}{\pi\ (1 + x^{2})}$. Let Z be a r.v. defined as...

$\displaystyle Z =\begin{cases}Y &\text{if}\ |Y|>1 \\ - Y &\text{if}\ |Y| \le 1\end{cases}$

... and V = X + Z. Find the p.d.f. $\displaystyle f_{V} (x)$...
Kind regards$\chi$ $\sigma$
 
  • #131
chisigma said:
Posted the 03 19 2014 on www.artofproblemsolving.com by the user Tetrapak1234 and not yet solved...

Two independent r.v. X and Y are given, both with p.d.f. $\displaystyle f(x) = \frac{1}{\pi\ (1 + x^{2})}$. Let Z be a r.v. defined as...

$\displaystyle Z =\begin{cases}Y &\text{if}\ |Y|>1 \\ - Y &\text{if}\ |Y| \le 1\end{cases}$

... and V = X + Z. Find the p.d.f. $\displaystyle f_{V} (x)$...

The first part is trivial: given the symmetry of the p.d.f. of Y around x=0, Z has the same p.d.f. as Y, and since Z is a function of Y alone it is independent of X, so the problem consists in finding the p.d.f. of the sum of two independent standard Cauchy r.v. The Fourier Transform of $f_{X}$ is given by... $\displaystyle \mathcal{F} \{f_{X}(x)\} = \frac{2}{\pi}\ \int_{0}^{\infty} \frac{\cos (\omega\ x)}{1+x^{2}}\ dx = e^{- |\omega|}\ (1) $

... so that...

$\displaystyle f_{V} (x) = \mathcal{F}^{-1} \{e^{-2\ |\omega|}\} = \frac{1}{\pi} \int_{0}^{\infty} e^{- 2\ \omega}\ \cos (\omega\ x)\ d \omega = \frac{2}{\pi\ (4 + x^{2})} = \frac{1}{2\ \pi\ [1 + (\frac{x}{2})^{2}]}\ (2)$

... i.e. V is Cauchy distributed with scale parameter 2.
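
A Monte Carlo check of (2): if V is Cauchy with scale parameter 2, then $\displaystyle P\{V \le 2\} = \frac{1}{2} + \frac{1}{\pi} \tan^{-1} 1 = \frac{3}{4}$ (a sketch):

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
x = rng.standard_cauchy(N)
y = rng.standard_cauchy(N)
z = np.where(np.abs(y) > 1, y, -y)  # the r.v. Z of the problem
v = x + z
print(np.mean(v <= 2.0))            # approximately 0.75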

Kind regards

$\chi$ $\sigma$
 
  • #132
Posted [in a bit different form...] the 03 17 2014 on www.artofproblemsolving.com by the user herrmann and not yet solved...

You are playing the following game: in every move you roll a regular die (1-6) and your current account is the sum of all dice rolls. For example, if in the first roll you get 3 and in the second roll 5, then your current account is 8. In every move you can either re-roll the die or take everything from your current account and finish the game. The only problem is that, if after some roll the amount on your account is a multiple of 6, you lose everything. What is the first number after which it is better to take everything from your account than to re-roll, and what is the expected value of this game if played optimally?...

Kind regards

$\chi$ $\sigma$
 
  • #133
chisigma said:
Posted [in a bit different form...] the 03 17 2014 on www.artofproblemsolving.com by the user herrmann and not yet solved...

You are playing the following game: in every move you roll a regular die (1-6) and your current account is the sum of all dice rolls. For example, if in the first roll you get 3 and in the second roll 5, then your current account is 8. In every move you can either re-roll the die or take everything from your current account and finish the game. The only problem is that, if after some roll the amount on your account is a multiple of 6, you lose everything. What is the first number after which it is better to take everything from your account than to re-roll, and what is the expected value of this game if played optimally?...

After n dice rolls the probability of still being 'in play' is $\displaystyle P_{n} = (\frac{5}{6})^{n}$, so that the expected value of the gain in this situation is...

$\displaystyle G_{1} = \frac{15}{5}\ P_{1} = \frac{5}{2} = 2.5$

$\displaystyle G_{2} = \frac {15 - 1 + 45}{9}\ P_{2} = \frac{1475}{324} = 4.552...$

$\displaystyle G_{3} = \frac{15 - 3 + 45 + 75}{13}\ P_{3} = \frac{16500}{2808} = 5.876...$

$\displaystyle G_{4} = \frac{15-6 + 45 + 75 + 105}{17}\ P_{4} = \frac{146250}{22032} = 6.638...$

$\displaystyle G_{5} = \frac{15 - 10 + 45 + 75 + 105 + 135}{21}\ P_{5} = \frac{1141625}{163296} = 6.985...$

$\displaystyle G_{6} = \frac{45 + 75 + 105 + 135+ 195}{25}\ P_{6} = \frac{8671875}{1166400} = 7.4347...$

$\displaystyle G_{7} = \frac{45 - 7 + 75 + 105 + 135 + 195 + 15 + 216}{29}\ P_{7} = \frac{60859375}{8118144} = 7.496...$

$\displaystyle G_{8} = \frac{45 - 15 + 75 + 105 + 135 + 195 + 231 + 15 + 240}{33}\ P_{8} = \frac{400781250}{55427328} = 7.23...$

At this point our job is finished and we can conclude that the 'optimal strategy' is to stop the play after the 7-th dice roll, because for n=8 the expected gain decreases...

Kind regards

$\chi$ $\sigma$
 
  • #134
Posted Jan 28, 2013 on Art of Problem Solving (AoPS) by the user thugzmath10 and not yet solved...

$A$ and $B$ are points on a circle centered at $O$ and radius $2$. An arbitrary point $X$ lies on the major arc $AB$. Determine the probability that $[AXB]\geq \sqrt{6}$.

Note: $[AXB]$ denotes the area of triangle $AXB$.
 
