Conditional PDF question -- I think anyway....

In summary, the conversation works through a pass-fail test taken repeatedly, where the probability of failing the kth attempt is P(F_k) = 1/2^k, so the chance of failing drops with each attempt while the results of different attempts remain independent. With X the number of tests taken until the first pass, the thread arrives at P(X = 1) = 1/2, P(X = 2) = 3/8, P(X = 3) = 7/64, and in general P(X = k) = (2^k − 1)/2^(k(k+1)/2). From this, the probability of taking the test more than twice is 1/8, and the probability of taking the test exactly twice, given that it is taken more than once, is 3/4.
  • #1
whitejac

Homework Statement


Suppose you take a pass-fail test repeatedly. Let S_k be the event that you are successful on your kth try, and F_k be the event that you fail the test on your kth try. On your first try, you have a 50% chance of passing the test.

P(S_1) = 1 − P(F_1) = 1/2.
Assume that as you take the test more often, your chance of failing the test goes down. In particular,
P(F_k) = (1/2)·P(F_{k−1}), for k = 2, 3, 4, ⋯

However, the results of different exams are independent. Suppose you take the test repeatedly until you pass the test for the first time. Let X be the total number of tests you take, so Range(X) = {1, 2, 3, ⋯}.
  1. Find P(X=1),P(X=2),P(X=3).
  2. Find a general formula for P(X=k) for k=1,2,⋯.
  3. Find the probability that you take the test more than 2 times.
  4. Given that you take the test more than once, find the probability that you take the test exactly twice.

Homework Equations


I would say the geometric PDF: P_X(k) = p·q^(k-1) for integer k, but after doing the problem I do not think it is exactly like that.
P(A|B) = P(A∩B) / P(B)

The Attempt at a Solution


This problem seemed a little too straightforward for my comfort zone, so I'm seeing if anyone will check my logic here.
P(X = 1): this is easy; the problem gives a 1/2 chance that we take this test only once.
P(X = 2): If I solved the first part correctly, then this is equally easy - we have S_2 = 1 - (1/2)(1/2) = 3/4
P(X = 3): Similarly, S_3 = 7/8

This implies that P(X = k) = 1 - 1/2^k

Finding the probability that the test will be taken more than 2 times would be the probability that we fail the second time, which is P(F_2) = (1/2)·P(F_1) = 1/4

Finding the probability of 2 tests given more than 1 test is trickier for me...
Finding P(A∩B) is the intersection between X = 2 and X > 1, which is X = 2. Dividing this probability by the probability of X > 1 is where I realize that either my initial answers were wrong or my interpretation of part 4 is wrong, but I'm leaning more towards part 1 being wrong:
P(A∩B) / P(B) = P(S_2) / P(F_1) = (3/4) / (1/2) = 6/4, which is impossible.
 
  • #2
The problem is a little unclear the way it is worded, and you made a typo, but go back to the first question:
What is P(X = 2)? You gave the probability of success on the 2nd test only, but we want to know the probability that you would actually end up failing the first test and then passing the 2nd test.
 
  • #3
So, you're saying that rather than considering this as P(k = 2), which is what I think you're saying I did, I should interpret P(X = 2) as P(1 ∩ 2), which would be P(A)P(B)?
This is how I understand the problem you have with my approach - and it's one that I am feeling more certain about, because the only way for X = 2 to occur would be to get past X = 1, unlike what I said, which was simply the probability of the kth term alone.
 
  • #4
Yes, I think that is what they want.
 
  • #5
Doesn't this make part 4 a little redundant? The only way for X = 2 to occur would be for X > 1 to occur, and we are not interested in the probability of anything beyond 2.
 
  • #6
Except they added a conditional probability, given that you are taking it more than once.
 
  • #7
That's what I thought the first time. This would be the conditional probability I mentioned in the equations section, because the only requirement for (X > 1) would be the condition that F_1 occurs.
 
  • #8
Yes, but you didn't get the first question right so you got the wrong answer.
 
  • #9

I think the best way to interpret ##P(F_k)## is as an absolute, not conditional probability. Presumably, you can write the test as many times as you want; it is just that after passing it on some trial, all further results would be ignored in assigning marks to you. It is as though you were writing additional exams just for the fun of doing it.

Of course, in the actual process you would only write Exam ##k## if you failed Exams ##1 \rightarrow k-1##; but your chances of passing/failing Exam ##k## would not be influenced by previous passes or failures. So, ##P(F_k) = 1/2^k## for ##k = 1,2,3, \ldots ##. Naturally, we also have ##P(S_k) = 1 \: - \: 1/2^k##, but that is not equal to ##P(X = k)##; can you see why?

For question 3, you need to find ##P(X > 2) = P(X = 3) + P(X = 4) + P(X = 5) + \cdots ##. This is an infinite series. Can you see a way to reduce the problem to one having a finite summation that is readily computable?

For question 4, you want ##P(X = 2|X > 1) = P(X = 2\; \cap \; X > 1) / P(X > 1) = P(X = 2)/P(X > 1)##.
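(A quick way to sanity-check this interpretation is to simulate it: write every exam independently with P(F_k) = 1/2^k and record when the first pass occurs. The sketch below is illustrative only; the function and variable names are not part of the problem.)
[code]
import random

def simulate_X(max_exams=60):
    """Take each exam independently with P(F_k) = 1/2**k; return the number
    of the exam on which the first pass occurs (the value of X)."""
    for k in range(1, max_exams + 1):
        if random.random() >= 1 / 2**k:   # pass exam k with probability 1 - 1/2**k
            return k
    return max_exams                      # failing 60 exams in a row is essentially impossible

trials = 200_000
counts = {}
for _ in range(trials):
    x = simulate_X()
    counts[x] = counts.get(x, 0) + 1

for k in (1, 2, 3):
    # empirical estimates of P(X = k); compare with the exact values derived later in the thread
    print(k, counts.get(k, 0) / trials)
[/code]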
 
  • #10
Ray Vickson said:
I think the best way to interpret ##P(F_k)## is as an absolute, not conditional probability. Presumably, you can write the test as many times as you want; it is just that after passing it on some trial, all further results would be ignored in assigning marks to you. It is as though you were writing additional exams just for the fun of doing it.

Of course, in the actual process you would only write Exam ##k## if you failed Exams ##1 \rightarrow k-1##; but your chances of passing/failing Exam ##k## would not be influenced by previous passes or failures. So, ##P(F_k) = 1/2^k## for ##k = 1,2,3, \ldots##. Naturally, we also have ##P(S_k) = 1 - 1/2^k##, but that is not equal to ##P(X = k)##; can you see why?
I don't really see why yet... I read through my book more thoroughly and looked at a similar example that was the opposite:
P_X(k) = 1/2^k
which they showed gives F_X(k) = (2^k - 1) / 2^k, the same thing as 1 - 1/2^k. They did this using
x < 1: F_X(x) = 0
1 ≤ x < 2: F_X(x) = P_X(1) = 1/2
2 ≤ x < 3: F_X(x) = P_X(1) + P_X(2) = 1/2 + 1/4 = 3/4...

This was extended to a part B that found P(2 < X ≤ 5), which was
F_X(5) - F_X(2) = 31/32 - 3/4 = 7/32
or
P_X(3) + P_X(4) + P_X(5) = 1/8 + 1/16 + 1/32 = 7/32

Now, if I extend this general problem and find the similarities between ours and theirs, I believe I do... but not entirely.
For my problem, then, we could define F_X(x) as 1 - 1/2^k, which is a nomenclature I'm still a little unsure of, but I believe it means "the function for the random variable given x"... Then, using a definition in my book, P(a < X ≤ b) = F_X(b) - F_X(a):
P(X = 2) would be P(1 < X ≤ 2) = (1 - 1/2^2) - (1 - 1/2) = 1/4, so P(X = k) = 1/2^k - which I suppose makes more sense numerically than my original answer, because for larger k it would seem less likely to ever happen, since every earlier value had such an exponentially greater chance of happening as well.
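(As a side check of the textbook example cited above, the numbers P_X(k) = 1/2^k, F_X(2) = 3/4, F_X(5) = 31/32 and P(2 < X ≤ 5) = 7/32 can be reproduced in a few lines of Python; this is only an illustrative sketch, not something from the book.)
[code]
from fractions import Fraction

def p(k):
    # the book example's PMF: P_X(k) = 1/2**k
    return Fraction(1, 2**k)

def F(k):
    # CDF: F_X(k) = P_X(1) + ... + P_X(k), which equals 1 - 1/2**k
    return sum(p(j) for j in range(1, k + 1))

print(F(2), F(5))              # 3/4 and 31/32
print(F(5) - F(2))             # P(2 < X <= 5) = 7/32
print(p(3) + p(4) + p(5))      # same thing summed directly: 7/32
[/code]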
 
  • #11

Maybe the book is doing a different example, but that does not apply in the current case. You need to think it through, not just follow the book. If X = toss number at which the first 'H' occurs, we have
[tex] P(\text{get heads eventually}) = P(X = 1) + P(X = 2) + P(X = 3) + \cdots [/tex]
This is true because the first head will occur either on toss 1 or toss 2 or toss 3 ... . That means that the events ##\{X = 1\}, \{X = 2 \}, \{ X = 3 \}, \ldots ## are disjoint, and so their probabilities can be safely added together to get "heads eventually'.

If you think that ##P(X = 1) = 1/2, P(X = 2) = 1-(1/4), P(X = 3) = 1 - (1/8), \ldots ## what do you get for the sum? Well, already by ##k = 2## you would get a probability > 1 (because ##1/2 + (1 - 1/4) = 1.25##, etc), and the sum just gets more and more above 1 as you take more terms. So, ##P(X = k) = P(H_k) = 1 - 2^{-k}## cannot possibly be right.
 
  • #12
I saw why it cannot be 1 - 1/2^k because, like you said, if we take the sum of the probabilities then we must stay below 1, and that does not. Trying to use the definition, rather than explicitly the example, I proposed that F_X(2) - F_X(1) would be equal to P(1 < X ≤ 2), which would give me P(X = 2). I organized it this way because, like you said, we have two choices repeatedly tested - pass or fail, heads or tails - and each test has a higher chance to succeed. Now, in the example and the definitions proven through the section, they assigned a function F_X(x) that is related in some way to a function P_X(x). These I thought were pass/fail designations, from the example, but if I am mistaking the nomenclature then I didn't mean to. My book never formally named F or P; it just introduced them as a method of discussing PDFs and CDFs.

Nevertheless, I can attempt to derive it logically: if we take this pass-fail 'test' as P(pass eventually) = P(X=1) + P(X=2) + ... + P(X=k) + ...,
then I am a little bit confused about this problem. It seems to me that we have three options for interpreting the sum of probabilities:
1) Adding up the probability of each success - we've already proven this cannot work, as it grows too large.
2) Adding up the probability of each failure - this gives us a surefire way of ensuring each test occurs, and it definitely sums to 1. Ex: to even reach X = 9, we must fail 8 times, and thus P(fail 8 times) = 1/2 + 1/4 + 1/8... However, this does not mean X = 9, because the only way we can stop the test is for a test to succeed... which gives way to
3) Adding up the failures plus the final success - this would give me the most 'accurate' interpretation because, as I see it... you add up the chance to fail until X reaches its 9th test, then you're given two choices - pass or fail. Now, failure is very small, 1/2^8, but that means that adding the chance to pass is also too large.

This final option resembles the geometric distribution ∑ p·q^k, but if I put in the pass/fail constraints, I calculated the series to sum to about 1/3 instead of 1...
 
  • #13

OK, you are on the right track: it is similar to a geometric distribution, but with the difference that the failure and success probabilities depend on the trial number, and so are not just constants ##q = 1-p## and ##p## at each trial. So, what do you think takes the place of ##q^{k-1} p##? (Go back to square 1, and ask yourself: in the geometric case, where do the ##q^k## and the ##p## come from?)
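(For the ordinary geometric case being referred to here, the q^(k-1) and the p come from multiplying the probabilities along the single sample point FF...FS. A minimal sketch of that bookkeeping, with a constant p = 1/2 chosen purely for illustration:)
[code]
from fractions import Fraction

p = Fraction(1, 2)   # constant success probability (ordinary Bernoulli trials)
q = 1 - p

def geom_pmf(k):
    """P(X = k) for the ordinary geometric case: k-1 failures, then one success."""
    prob = Fraction(1)
    for _ in range(k - 1):   # multiply in the k-1 independent failures
        prob *= q
    return prob * p          # ...and the final success

print([geom_pmf(k) for k in range(1, 5)])       # 1/2, 1/4, 1/8, 1/16
print(sum(geom_pmf(k) for k in range(1, 40)))   # partial sums approach 1
[/code]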
 
  • #14
Ray Vickson said:
OK, you are on the right track: it is similar to a geometric distribution, but with the difference that the failure and success probabilities depend on the trial number, and so are not just constants ##q = 1-p## and ##p## at each trial. So, what do you think takes the place of ##q^{k-1} p##? (Go back to square 1, and ask yourself: in the geometric case, where do the ##q^k## and the ##p## come from?)
Could we not say that p and q are functions? Or can they only be constant probabilities? The geometric distribution comes from a series of Bernoulli trials, where each trial either passes or it does not. In my case, it passes to the tune of P(S_k) and fails to the tune of P(F_k).
 
  • #15

I did not ask about Bernoulli trials, etc. I just asked for an explanation of the formula ##q^{k-1} p##, because once you understand that, you will be able to figure out how to adapt the formula to the new situation.

So, what do you use in place of the "geometric" formula ##P(X = k) = q^{k-1} p ##? If you understand where the ##q^{k-1}## and ##p## come from, you ought to be able to find the new formula.
 
  • #16
I must be missing something critical here... I want to view these things (the summations, p and q, the placement of the k's, etc.) as tools for deriving a situation mathematically, but I'm not seeing the relationships or something.
p is the chance to pass, and q is conversely the chance to fail.
q^(k-1) is the chance to fail k-1 times, and multiplying by p means that the final attempt is a success (because it has no relation to k).
Now, if we're to broaden this to allow p to depend on k, then we have to account for whatever value p is k times, because it has to be factored in each time we take the test. Otherwise its chances are constant and we're left with the "parent" function. We have to assume that for every failure q, there was an opportunity p for success. This opportunity grew at a rate of 1 - q, strangely enough. Charting out values of k = 1, 2, ... gives...

p: 1 - P(F_1), 1 - P(F_2), ... = 1/2, 3/4, ..., 1 - 1/2^k
q: 1/2, (1/2)·P(F_{k-1}), ... = 1/2, 1/4, ..., 1/2^k

How can we modify this with the value of k? Simply "plugging it in" does us little, as the value of p shifts every test. I proposed in my head representing p as (2^k - 1)/2^k, but that's equivalent to 1 - q, and we've already established that's out of the question. Subbing the pass and fail expressions into pq gives a false representation of the probability, because
k = 2 => P(X=1) = 1/2 (1/2 * 1) = 1/4
k = 3 => P(X=2) = 15/16 (1/2 * 1/4) = 15/128
...
And this gets much too small for a valid probability representation.
Using p^k and q^(k-1) gives...
(1/4)(1/4) = 1/16
(49/64)(1/16)...
Again the probability is becoming much, much too small, and given the fact that p is ultimately a fraction... I cannot see raising it to the power of k. Furthermore, that would bring it more in line with the binomial family, and that's contrary to my intentions.
 
  • #17

One more hint, then I quit.

Describe the sample space, in terms of sequences of F and S (fail and pass). In terms of pass/fail sequences, what does the sample point {X = 3} look like? Using that representation, how would you compute P(X = 3)? ALWAYS go back to the sample space when in doubt!
 
  • #18
Ray Vickson said:
One more hint, then I quit.

Describe the sample space, in terms of sequences of F and S (fail and pass). In terms of pass/fail sequences, what does the sample point {X = 3} look like? Using that representation, how would you compute P(X = 3)? ALWAYS go back to the sample space when in doubt!

Okay, I'm sorry for being so obtuse at this kind of stuff. I get set in a way of looking at something and attempt to adapt it rather than step back and reevaluate it.

Considering the sample space:
We know that this is geometric in nature, and even if we didn't we know that the aim of this test is to stop at the first success. This gives us:
S = { S, FS, FFS, FFFS, ..., FF⋯FS, ... }

Charting this out as tangible probabilities gives us: {(1/2), (1/2)(3/4), (1/2)(1/4)(7/8), ...} = {1/2, 3/8, 7/64, ...}

This is the idea that if it passes then it failed all of the times before. Now, I know this to be true from the sample space "possibilities", the given probability functions, and because this is consistently approaching 1 but not quite making it. I also ran this past my professor.

The snare here is this... I'm attempting to solve this for X = k, and I've found the answer but can't figure out how to write it...
The numerator is easy: (2^k - 1) => 1, 3, 7...
The denominator is a little less so...

I broke it down as follows:
2, 8, 64, 1024...
= 2^1 , 2^3, 2^6, 2^10...
= 2^1 , (2^2)(2^1) , (2^3)(2^3) , (2^4)(2^6)...
= (2^k), (2^k)(2^(k-1)), (2^k)(2^(k-1))(2^(k-2))(2^(k-3)), ... etc.

Now, I know this isn't a factorial exponent (2^k!) because it wouldn't be the same for k > 3.
Furthermore, I know this isn't a factorial on the outside (2^k)! because then you'd get 6*5*4... and it's already wrong.

Is there a mathematical term for multiplying by the previous value rather than the subsequent integers?
My next step was to convert this into p·q^(k-1) format, even though it's not required for the purpose of this problem, because we discussed it in so much depth and my professor said it was possible as well as beneficial to practice the idea.
 
  • #19

Congratulations, you are on the right track at last----that sample space point of view really works! Anyway, if
[tex] Q_j = \frac{1}{2} \cdot \frac{1}{2^2} \cdots \frac{1}{2^j}, [/tex]
we have
[tex] P(X = k) = Q_{k-1} \left( 1 - \frac{1}{2^k} \right) = Q_{k-1} - Q_{k}. [/tex]
This expression is handy, because from it we can get
[tex] \begin{array}{rcl} P(X \leq k) &=& P(X = 1) + P(X = 2) + \cdots + P(X = k)\\
&=& (1 - Q_1) + (Q_1 - Q_2) + (Q_2 - Q_3) + \cdots + (Q_{k-1} - Q_k)\\
&= &1 - Q_k
\end{array}
[/tex]
because all the intermediate terms ##Q_1, Q_2, \ldots, Q_{k-1}## cancel, to leave only the first and last term.

Of course you can find ##Q_j##, because
[tex] Q_j = \frac{1}{2^1} \frac{1}{2^2} \cdots \frac{1}{2^j} = \frac{1}{2^{1+2+ \cdots + j}}, [/tex]
and the sum of integers ##1 + 2 + \cdots + j## is well-known from any elementary algebra textbook.
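(A short numerical check of this telescoping form, taking Q_0 = 1 so that P(X = 1) = 1 - Q_1 = 1/2; illustrative only:)
[code]
from fractions import Fraction

def Q(j):
    # Q_j = (1/2)(1/2**2)...(1/2**j) = 1 / 2**(1 + 2 + ... + j) = 1 / 2**(j*(j+1)/2), with Q_0 = 1
    return Fraction(1, 2 ** (j * (j + 1) // 2))

def P(k):
    # P(X = k) = Q_{k-1} * (1 - 1/2**k) = Q_{k-1} - Q_k
    return Q(k - 1) - Q(k)

print([P(k) for k in (1, 2, 3, 4)])               # 1/2, 3/8, 7/64, 15/1024
print(sum(P(k) for k in range(1, 6)), 1 - Q(5))   # both equal 1 - Q_5, as the telescoping shows
[/code]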
 
  • #20
Thank you! That's an elegant way of writing pq^k.
I found P(X=k) to be (2^2k-1) / (2^k(k+1)/2).
 
  • #21
whitejac said:
Thank you! That's an elegant way of writing pq^k.
I found P(X=k) to be (2^2k-1) / (2^k(k+1)/2).

Your lack of parentheses makes it hard to know exactly what you mean, but I guess you mean ##(2^{2k} - 1)/2^{k(k+1)/2}##. Anyway, I found ##P(X=k)## to be ##(2^k - 1)/ 2^{k(k+1)/2}##, which has a significantly different numerator from yours. In fact: for k = 1,2,3,4 your formula gives invalid "probabilities" 3/2, 15/8, 63/64, 255/1024, whereas mine gives correct probabilities 1/2, 3/8, 7/64, 15/1024.

BTW: I don't understand your obsession with trying to write things like pq^k. In this problem the form pq^k just does not apply, so worrying about how to do the impossible seems to me a waste of time and effort.
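(The comparison above is easy to reproduce; a throwaway sketch evaluating both formulas exactly:)
[code]
from fractions import Fraction

def posted(k):
    # the posted formula, read as (2**(2k) - 1) / 2**(k*(k+1)/2)
    return Fraction(2 ** (2 * k) - 1, 2 ** (k * (k + 1) // 2))

def correct(k):
    # (2**k - 1) / 2**(k*(k+1)/2)
    return Fraction(2 ** k - 1, 2 ** (k * (k + 1) // 2))

print([posted(k) for k in (1, 2, 3, 4)])    # 3/2, 15/8, 63/64, 255/1024 -- not valid probabilities
print([correct(k) for k in (1, 2, 3, 4)])   # 1/2, 3/8, 7/64, 15/1024
[/code]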
 
  • #22
oh... I must've made a typo or didn't check my work very well. I don't know why I said 2k-1 in the exponent.

My obsessions were really just me trying to find relationships between general things and specific things. We were taught a list of common distributions that were successive derivations of the prior example (Bernoulli trials to geometric, and then to binomial, etc.). My professor used a similar logic, I think, in seeing how it appeared to play out and then manipulating that starting ground to where he wanted it to go. So I wanted to try this as well, because if it's "geometric" in nature, then it should follow that it relates in a similar way. Then, when I asked my professor about putting it in pq^(n-1) form, he said that he could, and implied it'd be good practice if I wanted to further my understanding of simplifying series.

Also, in the beginning, I just thought that's how it needed to end up. Then, once I found out it wasn't necessary, it just became a thing to attempt. I still view the common distributions like parent functions in nature, even if that may be naive.
 
  • #23

I think you misunderstood what I was saying: of course, in this example, the LOGIC is the same as for Bernoulli trials and the geometric distribution, etc., but the final outcome is just plain different---end of story. In both cases you work out P(X = k) from the probability of a sample point FFF...FS, but that is as far as it goes. In one case you get q^(k-1) p, but in the other case you get ##q_1 q_2 q_3 \cdots q_{k-1} p_k##. And, yes indeed, this is a product of (k-1) q's followed by a 'p', but the q's are all different and the 'p' depends on the trial number in the present example. (However, maybe I misunderstood YOUR intentions: maybe what I just said was actually what you meant, but while saying something else.)
 
  • #24
Oh yes! That's what I meant. I didn't spell out the q's and p's, and of course for this problem q and p are functions dependent on k, so maybe my laziness and assumptions caused some confusion. I was mostly trying to simplify more than necessary to emphasize the logic/form it related to, so I had a tether between what I knew and where I thought I was going. Also, I wanted to test and see how the 'geometric trial' worked when the probabilities weren't constant like in every example given, and see if I could derive it from or back to that idea. To me, that would be an important tactic to learn early on, as opposed to simply working the problem based on the formulas and standards already given. I really appreciate the patience you put into this problem with me, and I feel like I have at least an equal if not better understanding of this concept now compared to my classmates.
 

Related to Conditional PDF question -- I think anyway....

1. What is a conditional PDF?

A conditional PDF (Probability Density Function) is a statistical concept that represents the probability distribution of a random variable given that another variable has a specific value or falls within a specific range. It is used to calculate the likelihood of an event occurring under certain conditions.

2. How is a conditional PDF different from a regular PDF?

A regular PDF represents the probability distribution of a single random variable, while a conditional PDF represents the probability distribution of a random variable given that another variable has a specific value or falls within a specific range. It takes into account additional information or conditions that affect the probability of an event.

3. How is a conditional PDF calculated?

A conditional PDF is calculated by dividing the joint probability of the two variables by the probability of the condition. In mathematical notation, it can be expressed as P(X|Y) = P(X,Y) / P(Y), where X is the event of interest and Y is the condition.
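Applied to part 4 of the thread above, for example, this gives P(X = 2 | X > 1) = P(X = 2) / P(X > 1); a small illustrative sketch, using the values derived in the thread:

[code]
from fractions import Fraction

p_x2 = Fraction(3, 8)         # P(X = 2), from P(X = k) = (2**k - 1) / 2**(k*(k+1)/2)
p_gt1 = 1 - Fraction(1, 2)    # P(X > 1) = 1 - P(X = 1)

print(p_x2 / p_gt1)           # P(X = 2 | X > 1) = 3/4
[/code]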

4. What is the purpose of using a conditional PDF?

The purpose of using a conditional PDF is to better understand the relationship between two variables and to make more accurate predictions or decisions based on this information. It allows for a more nuanced analysis by taking into account the effect of certain conditions on the probability of an event.

5. Can a conditional PDF be used for any type of data?

Yes, a conditional PDF can be used for any type of data as long as the variables have a defined relationship and the conditions are well-defined. It is commonly used in fields such as statistics, mathematics, and data science to analyze and model complex systems with multiple variables.
