Probability distribution comparison

In summary, a regression can be used to assess whether two probability distributions are identical, optionally paired with a non-parametric runs test.
  • #1
pisaster
I am trying to compare two probability distributions. I tried the chi-square test, but that requires binned data, and all I have are probability values. It seems to work if I ignore the rules about degrees of freedom and just use df = 1, but I doubt this is statistically valid. I tried to 'unnormalize' my probabilities to approximate bins, but that is not working either. Are there any tests meant to compare normalized probability functions?
 
  • #2
What is the goal that you are trying to accomplish?
 
  • #3
I am doing biased molecular dynamics simulations. Since the method for unbiasing the data gives a probability curve, that is what I have to compare, rather than binned data. I am trying to show statistically that the probability curve from a new method matches the probability curve from the older, more computationally expensive method. Visual comparison supports this hypothesis, but I would like a mathematical way of showing it, so that I can plot simulation time versus some measure of similarity to the expected probability. I can do this using relative (but invalid) chi-square values, but I would rather have something I can justify mathematically.
 
  • #4
You could divide each distribution's domain into "bins" (frequency ranges) and use the test that way. A number of non-parametric tests can also be used, e.g. the runs test. See this thread.

P.S. From that earlier thread:
Originally posted by EnumaElish
There are several non-parametric tests for assessing whether 2 samples are from the same distribution. For example, the "runs" test. Suppose the two samples are [itex]u_1<...<u_n[/itex] and [itex]v_1<...<v_n[/itex]. Suppose you "mix" the samples. If the resulting mix looks something like [itex]u_1< v_1 < u_2 < u_3 < u_4 < v_2 < v_3 <[/itex] ... [itex] < u_{n-1} < v_{n-1} < v_n < u_n[/itex] then the chance that they are from the same distribution is greater than if they looked like [itex]u_1<...<u_n<v_1<...<v_n[/itex]. The latter example has a smaller number of runs (only two: first all u's then all v's) than the former (at least seven runs: one u, one v, u's, v's, ..., u's, v's, one u). This and similar tests are usually described in standard probability textbooks like Mood, Graybill and Boes.
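Here is a minimal Python sketch of the two-sample runs test described in the quote, using the usual normal approximation for the number of runs; the function name and the random example data are my own illustrations, not from the thread.

[code]
# Two-sample Wald-Wolfowitz runs test (normal approximation).
import numpy as np
from scipy.stats import norm

def runs_test(u, v):
    """Test whether samples u and v come from the same distribution."""
    pooled = np.concatenate([u, v])
    labels = np.concatenate([np.zeros(len(u)), np.ones(len(v))])
    sorted_labels = labels[np.argsort(pooled)]      # sort the pooled sample
    # A new run starts wherever the source label changes.
    runs = 1 + np.count_nonzero(np.diff(sorted_labels))
    n1, n2 = len(u), len(v)
    n = n1 + n2
    mean = 2.0 * n1 * n2 / n + 1.0                  # E[runs] under H0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1.0))
    z = (runs - mean) / np.sqrt(var)
    p = 2.0 * norm.sf(abs(z))                       # two-sided p-value
    return runs, z, p

rng = np.random.default_rng(0)
print(runs_test(rng.normal(size=200), rng.normal(size=200)))
[/code]

Too few runs (many u's in a row, then many v's) gives a large negative z and a small p-value, i.e. evidence against the samples coming from the same distribution.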
 
  • #5
Perhaps I should clarify: I have a series of 160 points along my reaction coordinate and a probability value for each. I tried to, in essence, reverse the normalization to create an approximation of binned data, but the probability curve was produced from over 500,000 data points. When I run the chi-square test on the unnormalized bins, the result is a value in the tens of thousands, which with 159 degrees of freedom gives a probability of about 0 that the bins match the expected distribution, even though the distribution actually looks quite like the expected one. I know chi-square is sensitive to the binning of the data, but because of the way the unbiasing method works, I cannot get a very large number of bins. Is there any test that is specific to comparing probabilities, or is there some way to use chi-square on probabilities without unnormalizing?
 
  • #6
Am I right to think that you have two "continuous" distributions that you have simulated, and you'd like to prove that they are identical?
 
  • #7
Yes, or more specifically, I would like to show how close to identical they are.
 
  • #8
Ideas:

1. You could make two variables X(t) = value of the "true" distribution (expensive simulation) at point t and Y(t) = value of the alternative distribution (practical simulation) at point t. Then run the regression Y(t) = a + b X(t) for as many t's as you can (or like), and test the joint hypothesis "(a = 0) AND (b = 1)": if the two curves are close, that hypothesis should not be rejected (see the first sketch after this list).

2. Plot X(t) and Y(t) on the same graph. Select a lower bound T0 and an upper bound T1. Let's assume X(T0) = Y(T0) and X(T1) = Y(T1), i.e. both T0 and T1 are crossing points. Divide the interval [T0,T1] into arbitrary subintervals {s(1),...,s(N)}. Define the string variable z(i) = "x" if the integral of X(t) - Y(t) over subinterval s(i) is positive; z(i) = "y" otherwise. You'll end up with a string like xxxyyyxyxyx... of length N. Now apply the RUNS TEST described above (see the second sketch after this list).
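A minimal numpy sketch of idea 1, assuming ordinary least squares and a standard F-test of the linear restriction (a, b) = (0, 1); the function and variable names are illustrative.

[code]
# OLS regression Y = a + b*X with a joint F-test of H0: a = 0 and b = 1.
import numpy as np
from scipy.stats import f as f_dist

def joint_test(x, y):
    n = len(x)
    X = np.column_stack([np.ones(n), x])           # design matrix [1, x]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta = (a_hat, b_hat)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                   # residual variance
    d = beta - np.array([0.0, 1.0])                # departure from H0
    # F-statistic for the two joint restrictions a = 0 and b = 1
    F = d @ (X.T @ X) @ d / (2.0 * s2)
    p = f_dist.sf(F, 2, n - 2)                     # large p: H0 not rejected
    return beta, F, p

# Illustrative curves on 160 grid points, as in the thread.
t = np.linspace(0.0, 1.0, 160)
x_curve = np.exp(-(t - 0.5) ** 2 / 0.02)           # stand-in "true" curve
y_curve = x_curve + np.random.default_rng(0).normal(0.0, 0.01, size=160)
print(joint_test(x_curve, y_curve))
[/code]

A large p-value here means the data are consistent with a = 0 and b = 1, i.e. the two curves agree up to noise.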
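And a sketch of idea 2, assuming both curves are sampled on the same grid t; the subinterval count n_sub and the function name are illustrative. The resulting run count can be fed into the runs test sketched earlier.

[code]
# Build the x/y string from the sign of the integral of X - Y over each
# subinterval of [T0, T1], then count the runs in it.
import numpy as np

def sign_string_runs(t, x, y, n_sub=20):
    edges = np.linspace(t[0], t[-1], n_sub + 1)
    labels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t <= hi)
        area = np.trapz(x[mask] - y[mask], t[mask])  # trapezoid-rule integral
        labels.append("x" if area > 0 else "y")
    s = "".join(labels)
    runs = 1 + sum(a != b for a, b in zip(s, s[1:]))  # count label changes
    return s, runs
[/code]

Choose n_sub so that each subinterval contains several grid points; otherwise the trapezoid integral over a subinterval carries little information.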

I may post again if I can think of anything else.
 
  • #9
thank you :smile:
 
  • #10
N.B. The "runs test" addresses the directionality of the error e(t) = X(t) - Y(t); the regression addresses the magnitude of the errors. Technically, the regression minimizes the sum of [itex]e(t)^2[/itex] = sum of [itex][X(t) - Y(t)]^2[/itex] over all t in the sample. Ideally one should apply both techniques to cover the directionality as well as the magnitude of the errors.
 

FAQ: Probability distribution comparison

What is a probability distribution?

A probability distribution is a mathematical function that gives the possible outcomes of an event or experiment and the likelihood of each outcome occurring.

How do you compare two probability distributions?

Two probability distributions can be compared by looking at their shape, central tendency, and spread. This can be done visually, by plotting the distributions on the same graph, or through statistical tests such as the Kolmogorov-Smirnov test or the chi-square test.
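For instance, a minimal example of the two-sample Kolmogorov-Smirnov test with scipy (the data here are illustrative random draws, not real measurements):

[code]
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=500)   # sample from distribution 1
b = rng.normal(0.1, 1.0, size=500)   # sample from distribution 2
stat, p = ks_2samp(a, b)             # small p: distributions differ
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
[/code]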

What is the purpose of comparing probability distributions?

Comparing probability distributions allows us to understand the similarities and differences between different sets of data. This can help us make decisions, identify patterns, and draw conclusions about the underlying process that generated the data.

Can probability distributions with different shapes be compared?

Yes, probability distributions with different shapes can be compared. However, it is important to note that different shapes may indicate different underlying processes and may require different methods for comparison.

What are some common methods used for probability distribution comparison?

Some common methods used for probability distribution comparison include visual inspection, statistical tests, and measures of similarity such as the Kullback-Leibler divergence or the Jensen-Shannon divergence.
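A short sketch of these divergences for two discrete probability vectors defined on the same grid (the vectors here are illustrative):

[code]
import numpy as np
from scipy.stats import entropy

p = np.array([0.1, 0.4, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.3, 0.2])

kl = entropy(p, q)                               # Kullback-Leibler divergence KL(p||q)
m = 0.5 * (p + q)
js = 0.5 * entropy(p, m) + 0.5 * entropy(q, m)   # Jensen-Shannon divergence
print(kl, js)
[/code]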
