Random variable and probability density function

In summary: In general, the change-of-variables formula can be used whether ##g## is one-to-one or not; the two simpler equations for one-to-one ##g## are special cases of it. The first applies when ##g## is one-to-one and increasing, the second when ##g## is one-to-one and decreasing.
  • #1
PainterGuy
Homework Statement
The problem is about a continuous random variable and its probability density function.
Relevant Equations
Please check the posting.
Hi,

I was trying to solve the attached problem, which includes its solution as well. I cannot understand how and where they are getting equations 3.69 and 3.69A from.

Are they substituting the values of θ₁ and θ₂ into Expression 1 after performing the differentiation to get equations 3.70 and 3.71?

As you can see, the solution is there, but I need to understand it. I'd really appreciate it if you could help me with it. Thank you!

Hi-resolution image copy: https://imagizer.imageshack.com/img923/1491/lXm2aZ.jpg
 

Attachments

  • prob_88.jpg
  • #2
I am not familiar with the notation
[tex]f_y(y)[/tex]
I see it is a function of the variable y, but what does the subscript y in ##f_y## mean? It seems ##f_y(y)## is defined somewhere before the problem in your text.
 
  • #3
anuttarasammyak said:
I am not familiar with the notation
[tex]f_y(y)[/tex]
I see it is a function of the variable y, but what does the subscript y in ##f_y## mean? It seems ##f_y(y)## is defined somewhere before the problem in your text.
It means the probability density function of the random variable "Y", with "y" as a dummy variable, i.e. ##f_Y(y)##.
 
  • #4
Thanks. So I observe the correspondence of areas between the distributions of y and ##\theta##, as attached.

(Attached graph: 210925.JPG, showing the y-##\theta## correspondence.)
 
  • #5
My apologies, but I still don't get how and where they are getting equations 3.69 and 3.69A from.
 
  • #6
[tex]f_y(y)dy = f_\theta(\theta) d\theta|_{\theta_1}+f_\theta(\theta) d\theta|_{\theta_2}[/tex] or, being more careful about signs,
[tex]f_y(y)|dy| = f_\theta(\theta) |d\theta||_{\theta_1}+f_\theta(\theta) |d\theta||_{\theta_2}[/tex]
Divide both sides by |dy|.
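Carrying out that division, and using ##y=\cos(\alpha+\theta)## from the original problem so that ##\left|\frac{dy}{d\theta}\right| = |\sin(\alpha+\theta)| = \sqrt{1-y^2}## at both roots, this gives
[tex]f_y(y) = \frac{f_\theta(\theta_1)}{\left|\frac{dy}{d\theta}\right|_{\theta_1}} + \frac{f_\theta(\theta_2)}{\left|\frac{dy}{d\theta}\right|_{\theta_2}} = \frac{f_\theta(\theta_1)+f_\theta(\theta_2)}{\sqrt{1-y^2}},[/tex]
which is presumably the form that equations 3.69/3.69A take in the attachment.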
 
  • #7
What is [itex]f_\theta(\theta) d\theta[/itex] and where does it come from? I don't see its connection to the original function ##y=\cos(\alpha+\theta)##. Could you please help?
 
  • #8
[tex]f_\theta(\theta)\ d\theta|_{\theta_1}[/tex]
is the probability for ##\theta## to take a value in ## [\ \theta_1-\frac{1}{2} d\theta,\ \theta_1+ \frac{1}{2}d\theta\ ]##.
It comes from the graph of the y-##\theta## correspondence that I attached.
 
  • #9
anuttarasammyak said:
[tex]f_\theta(\theta)\ d\theta|_{\theta_1}[/tex]
is the probability for ##\theta## to take a value in ## [\ \theta_1-\frac{1}{2} d\theta,\ \theta_1+ \frac{1}{2}d\theta\ ]##.
It comes from the graph of the y-##\theta## correspondence that I attached.
Thank you, but honestly speaking, I'm still having a hard time. I think I should give it a few hours; perhaps it'll make sense then.
 
  • #10
If you have a random variable ##X##, you can have a function of that variable ##g(X)## which defines a new random variable ##Y=g(X)##. ##X## has a pdf ##f_X(x)##; similarly, the variable ##Y## has a pdf ##f_Y(y)##.

Assume for a moment that ##g(x)## monotonically increases. The function ##g## maps the interval ##(x,x+dx)## to the interval ##(y,y+dy)##, so the probability that ##X## is between ##x## and ##x+dx## should be equal to the probability ##Y## is between ##y## and ##y+dy##. In terms of the pdfs, you have
$$f_X(x)\,dx = f_Y(y)\,dy.$$ Since ##dy = g'(x)\,dx##, you get ##f_Y(y) = f_X(x)/g'(x)##.

If ##g## were a decreasing function, however, we'd run into a problem as the pdfs need to be non-negative. In that case, we should have ##f_Y(y) = -f_X(x)/g'(x)## instead. To cover both cases, we use the absolute value of ##g'(x)##.

Finally, if we drop the requirement that ##g## is a one-to-one function, then it's possible that more than one value of ##x## maps to a particular value of ##y##. In that case, we need to sum the probabilities to get
$$f_Y(y)\,dy = \sum_{g(x_i)=y} f_X(x_i)\,dx \quad \Rightarrow \quad f_Y(y) = \sum_{g(x_i)=y} \frac{f_X(x_i)}{\lvert g'(x_i) \rvert}.$$ This formula applied to this particular problem is where equation 3.69 comes from.
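As a concrete check of that formula, here is a minimal numerical sketch for the special case ##\alpha=0##, i.e. ##Y=\cos\Theta##, assuming for illustration that ##\Theta## is uniform on ##(0,2\pi)## (the variable names, sample size, and that uniform assumption are only illustrative). With two roots per period and ##|g'(\theta)|=|\sin\theta|=\sqrt{1-y^2}##, the formula predicts ##f_Y(y)=\frac{1}{\pi\sqrt{1-y^2}}##:
[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # Theta ~ Uniform(0, 2*pi), assumed here for illustration
y = np.cos(theta)                          # Y = g(Theta) = cos(Theta), i.e. the alpha = 0 case

# Empirical density of Y: bin the samples and divide by (total samples * bin width).
# The range stops short of y = +/-1, where the predicted density blows up (integrably).
counts, edges = np.histogram(y, bins=100, range=(-0.95, 0.95))
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = counts / (n * np.diff(edges))

# Density predicted by the change-of-variables formula:
#   f_Y(y) = sum over roots theta_i of  f_Theta(theta_i) / |g'(theta_i)|
#          = 2 * (1/(2*pi)) / sqrt(1 - y^2)   (two roots per period, |g'| = |sin| = sqrt(1 - y^2))
predicted = 1.0 / (np.pi * np.sqrt(1.0 - centers**2))

# The two should agree to within a few percent (sampling and binning error)
print(np.max(np.abs(empirical - predicted) / predicted))
[/CODE]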
 
  • #11
Thank you very much!

vela said:
Since ##dy = g'(x)\,dx##, you get ##f_Y(y) = f_X(x)/g'(x)##.

In that case, we need to sum the probabilities to get
$$f_Y(y) = \sum_{g(x_i)=y} \frac{f_X(x_i)}{\lvert g'(x_i) \rvert}.$$

Isn't the last expression more general? I mean, it's applicable whether ##g## is a one-to-one function or not.
 
  • #12
PainterGuy said:
Isn't the last expression more general? I mean, it's applicable whether ##g## is a one-to-one function or not.
Yes, that's what the last paragraph of my previous post said.
 

FAQ: Random variable and probability density function

What is a random variable?

A random variable is a numerical quantity whose value is determined by chance or randomness. It is used to represent the outcome of a random experiment or process.

What is the difference between a discrete and continuous random variable?

A discrete random variable can only take on a finite or countably infinite number of values, while a continuous random variable can take on any value within a certain range.

What is a probability density function (PDF)?

A probability density function is a mathematical function that describes the probability distribution of a continuous random variable. Integrating it over an interval gives the probability that the random variable falls within that range of values.
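In symbols, for a continuous random variable ##X## with PDF ##f_X##:
$$P(a \le X \le b) = \int_a^b f_X(x)\,dx.$$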

How is a PDF related to a cumulative distribution function (CDF)?

The CDF is the integral of the PDF from ##-\infty## up to a given value, and it gives the probability that the random variable is less than or equal to that value. Conversely, the PDF is the derivative of the CDF.
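In symbols:
$$F_X(x) = P(X \le x) = \int_{-\infty}^{x} f_X(t)\,dt, \qquad f_X(x) = \frac{d}{dx}F_X(x).$$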

How is the mean and variance of a random variable calculated from its PDF?

The mean of a continuous random variable is the integral of the value weighted by the PDF, i.e. a weighted average of all possible values with the PDF as the weighting. The variance is the integral of the squared difference between each value and the mean, weighted in the same way.
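In symbols, for a continuous random variable ##X## with PDF ##f_X##:
$$\mu = E[X] = \int_{-\infty}^{\infty} x\,f_X(x)\,dx, \qquad \sigma^2 = \operatorname{Var}(X) = \int_{-\infty}^{\infty} (x-\mu)^2 f_X(x)\,dx = E[X^2] - \mu^2.$$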
