Testing Hypotheses with Bernoulli Distribution

In summary, the thread discusses a test procedure for a random sample from the Bernoulli distribution with unknown parameter P. The procedure rejects the null hypothesis when Xbar is less than a specified value c, and c can be determined by comparing binomial probabilities under the null hypothesis. For a test of exact size α a randomized test can be used, and for large n the normal approximation can be applied.
  • #1
LBJking123
This is the question:
Suppose that X1, ..., Xn form a random sample from the Bernoulli distribution with unknown parameter P. Let P0 and P1 be specified values such that 0 < P1 < P0 < 1, and suppose it is desired to test the following simple hypotheses: H0: P = P0, H1: P = P1.
A. Show that a test procedure for which α(δ) + β(δ) is minimized rejects H0 when Xbar < c.
B. Find the value of c.

I know this problem is not that difficult; I just can't figure out where to start. I know the Bernoulli distribution, but I can't figure out how to get α(δ) and β(δ). I have not seen any problems like this, so I'm kind of lost. Any help would be much appreciated. Thanks!
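For reference, the two quantities in part A are the standard error probabilities of the test. A sketch of the definitions, assuming δ is the procedure that rejects H0 when Xbar < c:

[tex]
\alpha(\delta) = P(\text{Reject } H_0 \mid P = P_0) = P(\bar{X}_n < c \mid P = P_0), \qquad
\beta(\delta) = P(\text{Accept } H_0 \mid P = P_1) = P(\bar{X}_n \ge c \mid P = P_1).
[/tex]

Minimizing α(δ) + β(δ) over all test procedures then reduces to comparing the two likelihoods f(x | P1) and f(x | P0) at each possible sample, which is where the rejection region Xbar < c comes from.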
 
  • #2
You didn't say what [itex]\delta[/itex] is.
 
  • #3
ΣX (the number of successes) ~ Binomial(n, p0) under the null hypothesis H0. You know the probabilities of ΣX = 0, 1, ..., n under H0. Find the value c such that P[ΣX ≤ c − 1] < α ≤ P[ΣX ≤ c]. Now compare your observed value of ΣX with c.
If you want a test of exact size α, a randomized test is needed.
For large n you can use the normal approximation, by the De Moivre–Laplace limit theorem.
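To make that recipe concrete, here is a minimal Python sketch; the values n = 20, p0 = 0.6 and α = 0.05 are made up for illustration and do not come from the thread:

[code]
# A numerical sketch of the recipe above, with illustrative values
# n = 20, p0 = 0.6, alpha = 0.05 (none of these come from the thread).
from scipy.stats import binom, norm

n, p0, alpha = 20, 0.6, 0.05

# Smallest c with P[S <= c] >= alpha, so that P[S <= c-1] < alpha <= P[S <= c],
# where S = sum of the X_i ~ Binomial(n, p0) under H0.
c = next(k for k in range(n + 1) if binom.cdf(k, n, p0) >= alpha)

# Non-randomized test: reject H0 when S <= c - 1 (size P[S <= c-1] < alpha).
size_nonrandomized = binom.cdf(c - 1, n, p0)

# Randomized test of exact size alpha: also reject with probability gamma when S = c.
gamma = (alpha - binom.cdf(c - 1, n, p0)) / binom.pmf(c, n, p0)

# De Moivre-Laplace (normal) approximation to the same cutoff for large n,
# ignoring the continuity correction.
c_approx = n * p0 + norm.ppf(alpha) * (n * p0 * (1 - p0)) ** 0.5

print(c, size_nonrandomized, gamma, c_approx)
[/code]

The randomized test rejects outright when ΣX ≤ c − 1 and rejects with probability gamma when ΣX = c, which gives exact size α.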
 

Related to Testing Hypotheses with Bernoulli Distribution

What is the Bernoulli distribution?

The Bernoulli distribution is a discrete probability distribution that represents the outcome of a single experiment with two possible results: success (usually denoted as 1) and failure (usually denoted as 0). It is used to model situations where there are only two possible outcomes, such as flipping a coin, checking whether a die roll shows a six, or conducting a yes-or-no experiment.
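In symbols, the Bernoulli(p) probability mass function can be written as

[tex]
f(x \mid p) = p^{x}(1-p)^{1-x}, \qquad x \in \{0, 1\},
[/tex]

with mean p and variance p(1 − p).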

How is the Bernoulli distribution related to hypothesis testing?

In hypothesis testing, the Bernoulli distribution is used to test a statistical hypothesis about the probability of success in a population. It is often used to determine whether there is a significant difference in the proportion of successes between two groups, or to compare a sample proportion to a hypothesized population value.

What are the assumptions of using the Bernoulli distribution for hypothesis testing?

The main assumption when using the Bernoulli distribution for hypothesis testing is that the outcomes of the experiment are independent and identically distributed (i.i.d.). This means that the probability of success does not change from one trial to another and that the trials are not influenced by previous outcomes.

How do you calculate the probability of success in the Bernoulli distribution?

The probability of success in the Bernoulli distribution is denoted p. In a hypothesis test it is specified under the null hypothesis, based on prior knowledge or assumptions. It can also be estimated from a sample using the formula p̂ = (number of successes) / (total number of trials), and this estimate can then be used in hypothesis testing.
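A one-line illustration in Python, using a made-up sample of 0/1 outcomes:

[code]
# Hypothetical sample of Bernoulli outcomes (1 = success, 0 = failure).
outcomes = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]

# Sample proportion: number of successes divided by number of trials.
p_hat = sum(outcomes) / len(outcomes)
print(p_hat)  # 0.7
[/code]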

What is an example of using the Bernoulli distribution for hypothesis testing?

An example of using the Bernoulli distribution for hypothesis testing is determining whether a new medication is effective in treating a certain condition. The null hypothesis might be that the medication has a success rate of 50%, and a sample of patients would be observed to see whether the success rate differs significantly from 50%. The outcome for each patient (success or failure) follows a Bernoulli distribution, and the observed proportion of successes is then used in the test.
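A minimal sketch of such a test in Python, with made-up numbers (32 successes out of 50 patients); scipy.stats.binomtest is available in recent SciPy versions:

[code]
# Exact binomial test of H0: p = 0.5, using hypothetical data
# (32 successes out of 50 patients -- not from the thread).
from scipy.stats import binomtest

result = binomtest(k=32, n=50, p=0.5, alternative='two-sided')
print(result.pvalue)  # a small p-value suggests the success rate differs from 50%
[/code]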
