What Is the Probability Function of Z When X~Bernoulli(θ) and Y~Geometric(θ)?

In summary, the probability function of Z is the convolution of the probability functions of X and Y. This can be expressed as a summation from -inf to inf, where k is the dummy variable and z represents the desired value. The Kronecker delta function and the unit step function simplify the expressions for the Bernoulli and Geometric distributions, making it possible to write the convolution as a one-line expression. It is important to keep track of which variable is the dummy variable and which represents the desired value in order to apply the convolution formula correctly.
  • #1
sneaky666

Homework Statement


Let X~Bernoulli(θ) and Y~Geometric(θ), with X and Y independent. Let Z=X+Y. What is the probability function of Z?


Homework Equations





The Attempt at a Solution



I am getting
pX(1) = θ
pX(0) = 1-θ
pX(x) = 0 otherwise
pY(y) = θ(1-θ)^y for y >= 0
pY(y) = 0 otherwise


not sure where to go from here...
 
  • #2
Do you know this result? The probability distribution of [itex]Z[/itex] is the convolution of the probability distributions of [itex]X[/itex] and [itex]Y[/itex].
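As a side note (not part of the original exchange), the convolution result is easy to check numerically. The sketch below uses Python with NumPy; θ = 0.3 and the truncation point N are arbitrary choices made for illustration:

```python
import numpy as np

theta = 0.3
N = 60  # truncate the geometric support at N (remaining tail mass is negligible)

# Bernoulli(theta) pmf on {0, 1}
px = np.array([1 - theta, theta])

# Geometric(theta) pmf on {0, 1, ..., N}: P(Y = y) = theta * (1 - theta)**y
py = theta * (1 - theta) ** np.arange(N + 1)

# pmf of Z = X + Y is the discrete convolution of the two pmfs
pz = np.convolve(px, py)

print(pz[0])      # P(Z = 0) = (1 - theta) * theta
print(pz.sum())   # close to 1, up to the truncation
```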
 
  • #3
No, I am not sure, which is why I need help.
 
  • #4
There's a quick proof of the convolution result at the top of this PDF file:

http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf

That should get you started. If you get stuck, post what you have and I'll try to help.
 
  • #5
I did that research and the only thing I could come up with is

PX(X=1) = θ
PX(X=0) = 1-θ
PX(X=x) = 0 otherwise
PY(Y=y>=0) = θ(1-θ)^y
PY(Y=y) = 0 otherwise

so (X=k) and (Y=z-k) since Z = X+Y

PZ(Z=z)=

summation from -inf to inf
θ^2 * (1-θ)^(z-1)
if x=1,y=1

summation from -inf to inf
θ * (1-θ)^(z+1)
if x=0,y=0

0
otherwise


Is this right?
 
  • #6
I don't think that's quite right. I am going to introduce some notation to make it easier to express the functions:

Kronecker delta function

[tex]\delta(k) = \begin{cases}
1, & k = 0 \\ 0, & \textrm{otherwise} \end{cases}[/tex]

Unit step function

[tex]u(k) = \begin{cases}
1, & k \geq 0 \\ 0, & \textrm{otherwise} \end{cases}[/tex]

Bernoulli distribution

[tex]b(k) = (1-\theta)\delta(k) + \theta \delta(k-1)[/tex]

Geometric distribution

[tex]g(k) = \theta (1-\theta)^k u(k)[/tex]

Distribution of the sum of independent Bernoulli and Geometric

[tex]\begin{align*}
s(k) &= \sum_{m=-\infty}^{\infty} g(m) b(k-m) \\
&= \sum_{m = 0}^{\infty} \theta(1 - \theta)^m [(1-\theta)\delta(k-m) + \theta \delta(k-m-1)]
\end{align*}[/tex]

You can now consider three cases:

1) [itex]k < 0[/itex]: the sum is zero
2) [itex]k = 0[/itex]: only one of the two [itex]\delta[/itex] functions is nonzero for some [itex]m \geq 0[/itex]
3) [itex]k > 0[/itex]: both [itex]\delta[/itex] functions are nonzero for some [itex]m \geq 0[/itex]
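A quick numerical sketch of the sum above (my own addition, not from the thread; plain Python, with θ = 0.4 and the truncation point M chosen arbitrarily):

```python
theta = 0.4
M = 400  # truncate the infinite sum over m

def delta(k):          # Kronecker delta
    return 1.0 if k == 0 else 0.0

def u(k):              # unit step
    return 1.0 if k >= 0 else 0.0

def b(k):              # Bernoulli(theta) pmf
    return (1 - theta) * delta(k) + theta * delta(k - 1)

def g(k):              # Geometric(theta) pmf
    return theta * (1 - theta) ** k * u(k) if k >= 0 else 0.0

def s(k):              # the convolution sum, truncated at M
    return sum(g(m) * b(k - m) for m in range(M))

# the three cases:
print(s(-1))  # k < 0: zero
print(s(0))   # k = 0: only delta(k - m) fires, at m = 0
print(s(3))   # k > 0: both deltas fire, at m = k and m = k - 1
```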
 
  • #7
OK, I see, but I looked on Wikipedia, and for the Bernoulli dist. and Geometric dist. I thought it was

PX(X=1) = θ
PX(X=0) = 1-θ
PX(X=x) = 0 otherwise
PY(Y=y>=0) = θ(1-θ)^y
PY(Y=y) = 0 otherwise

Or is this basically what you have? And why did you add those extra functions? I don't understand how you're getting those functions from what I have...
 
  • #8
sneaky666 said:
OK, I see, but I looked on Wikipedia, and for the Bernoulli dist. and Geometric dist. I thought it was

PX(X=1) = θ
PX(X=0) = 1-θ
PX(X=x) = 0 otherwise
PY(Y=y>=0) = θ(1-θ)^y
PY(Y=y) = 0 otherwise

Or is this basically what you have? And why did you add those extra functions? I don't understand how you're getting those functions from what I have...

Yes, your functions and mine are equivalent. I added the extra functions so I don't have to express them as individual cases (x = 1, x = 0, otherwise) as you did.

To see that mine are the same as yours, just plug in various values of [itex]k[/itex].

For example, if

[tex]b(k) = (1 - \theta)\delta(k) + \theta\delta(k-1)[/tex]

then notice that [itex]\delta(k)[/itex] is zero except when [itex]k = 0[/itex], and [itex]\delta(k-1)[/itex] is zero except when [itex]k = 1[/itex].

Thus [itex]b(0) = (1 - \theta)(1) + 0 = (1 - \theta)[/itex] and [itex]b(1) = 0 + \theta(1) = \theta[/itex] and [itex]b(k) = 0[/itex] if [itex]k[/itex] is neither 0 nor 1.

Similarly with the geometric distribution.

The point is that it makes it possible to write a one-line expression that is valid for all [itex]k[/itex], which in turn makes it easier to express the convolution sum.

By the way, this isn't some weird invention of mine - it's a standard thing to do when working with functions defined in pieces, and the notation ([itex]\delta(k)[/itex] and [itex]u(k)[/itex]) is quite standard as well.
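For instance (my own sketch, not from the thread), the one-line form can be checked against the piecewise cases directly in Python:

```python
theta = 0.3

def delta(k):   # Kronecker delta
    return 1 if k == 0 else 0

def b(k):       # one-line Bernoulli pmf, valid for every integer k
    return (1 - theta) * delta(k) + theta * delta(k - 1)

# matches the piecewise definition: b(0) = 1 - theta, b(1) = theta, else 0
print([b(k) for k in (-1, 0, 1, 2)])
```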
 
  • #9
I have one last concern about why my answer is wrong:

since the main equation is

[tex]\sum_{k=-\infty}^{\infty} P(X=k)P(Y=z-k)[/tex]

so how I got my answers is because of
PX(X=1) = θ
PX(X=0) = 1-θ
PX(X=x) = 0 otherwise
PY(Y=y>=0) = θ(1-θ)^y
PY(Y=y) = 0 otherwise

I have 3 cases: X=k is 0, 1, or something else
so if k = 0 then you would have
P(X=k)P(Y=z-k)
P(X=0)P(Y=z-0)
(1-θ)P(Y=z)
(1-θ)θ(1-θ)^z
θ(1-θ)^(z+1)

so if k = 1 then you would have
P(X=k)P(Y=z-k)
P(X=1)P(Y=z-1)
θθ(1-θ)^(z-1)
θ^2(1-θ)^(z-1)

so if k != 0,1 then you would have
P(X=k)P(Y=z-k)
0*P(Y=z-k)
0

(and of course the summation beside them, I didn't add it here)

So I don't understand what is wrong here?
 
  • #10
You are mixing up your [itex]k[/itex] and [itex]z[/itex]. One of them is a dummy variable used in the summation, and the other one is the letter that you use to fill in the blank:

P(X + Y = ____)

So let's pick which one is which and stick with it.

If you want to fill in the blank with z,

[tex]P(X + Y = z) = \sum_{k=-\infty}^{\infty} P(X = k) P(Y = z - k)[/tex]

then [itex]k[/itex] is the dummy variable in the sum (it doesn't appear on the left side at all). So your three cases apply to [itex]z[/itex], not [itex]k[/itex]:

Case 1: z < 0
Case 2: z = 0
Case 3: z > 0

Try that and see if it helps.
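Working through those three cases gives a closed form, which can be sanity-checked by simulation. The sketch below is my own addition, not an answer posted in the thread (plain Python; θ = 0.25, the sample size, and the seed are arbitrary choices):

```python
import random

theta = 0.25

def pmf_z(z):
    # closed form obtained from the three cases on z
    if z < 0:
        return 0.0
    if z == 0:
        return (1 - theta) * theta                      # X = 0 and Y = 0
    # z > 0: the k = 0 and k = 1 terms both contribute
    return theta * (1 - theta) ** (z - 1) * (1 - theta + theta ** 2)

# Monte Carlo check against direct samples of Z = X + Y
random.seed(1)

def sample_z():
    x = 1 if random.random() < theta else 0
    y = 0
    while random.random() >= theta:  # failures before the first success
        y += 1
    return x + y

n = 100_000
counts = {}
for _ in range(n):
    z = sample_z()
    counts[z] = counts.get(z, 0) + 1

for z in range(4):
    print(z, counts.get(z, 0) / n, pmf_z(z))
```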
 

FAQ: What Is the Probability Function of Z When X~Bernoulli(θ) and Y~Geometric(θ)?

What is a random variable?

A random variable is a numerical quantity whose value is determined by the outcome of a random event. It can take on different values with certain probabilities associated with each value.

What does it mean for two random variables to be independent?

Two random variables are independent if the occurrence of one event does not affect the probability of the other event. In other words, the outcome of one variable has no influence on the outcome of the other variable.

How can you test for independence between two random variables?

A common check is to compute their covariance or correlation. Independence implies a covariance of 0, but the converse does not hold: a covariance (or correlation coefficient) of 0 only means the variables are uncorrelated, not necessarily independent. Establishing independence requires showing that the joint distribution factors into the product of the marginals for every pair of values.
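As a concrete illustration of that last point (my own example, not part of the original FAQ): take X uniform on {-1, 0, 1} and Y = X², which are clearly dependent yet have zero covariance.

```python
xs = [-1, 0, 1]
p = 1 / 3                             # X is uniform on xs; Y = X**2

ex  = sum(x * p for x in xs)          # E[X] = 0
ey  = sum(x ** 2 * p for x in xs)     # E[Y] = 2/3
exy = sum(x ** 3 * p for x in xs)     # E[XY] = E[X**3] = 0

cov = exy - ex * ey
print(cov)   # 0 -> uncorrelated

# ...but not independent: P(X = 1, Y = 0) = 0, while P(X = 1) * P(Y = 0) = 1/9
```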

Why is independence between random variables important?

Independence between random variables is important because it allows us to simplify complex probability calculations. When two variables are independent, their joint probability distribution can be expressed as the product of their individual probability distributions.

Can two dependent random variables be treated as independent?

No, two dependent random variables cannot be treated as independent. This is because the outcome of one variable affects the outcome of the other, so their joint probability distribution cannot be expressed as the product of their individual distributions.
