How to measure the 'agreement' between two assays?

In summary: the conversation discusses selecting between two assays, A and B, that measure the same property of molecules. The plan is to test 100000 molecules by running either A or B on all of them, as running both is too expensive. The goal is to determine whether there is enough agreement between A and B, and if so, to use A due to its lower cost. The proposal is to select a random subset of 5000 molecules, run both A and B on them, and compare the results. Various statistical measures, such as the concordance correlation coefficient and rank correlation coefficients, are discussed as potential methods for analysing the agreement.
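The agreement measures mentioned in the summary (Lin's concordance correlation coefficient and a rank correlation) can be sketched in a few lines of Python. The paired assay values below are made up purely for illustration:

```python
# Hypothetical paired results for a subset of molecules run through both
# assay A and assay B (the numbers are invented for illustration only).
from statistics import mean, pvariance

a = [1.2, 2.3, 3.1, 4.0, 5.2, 6.1, 7.3, 8.0]
b = [1.0, 2.6, 2.9, 4.3, 5.0, 6.4, 7.1, 8.5]

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement with the y = x line."""
    mx, my = mean(x), mean(y)
    vx, vy = pvariance(x), pvariance(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def spearman_rho(x, y):
    """Spearman rank correlation (no tie handling, for simplicity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((ri - si) ** 2 for ri, si in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(round(concordance_ccc(a, b), 3))
print(round(spearman_rho(a, b), 3))   # both series are monotone, so rho = 1.0
```

Note that the CCC penalises any systematic offset or scale difference between the assays, while the rank correlation only checks whether they order the molecules the same way; the two can disagree substantially on real data.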
  • #36
Hey lavoisier.

I'm wondering whether you have studied clinical trials and bio-statistics because this is precisely the kind of thing that this field looks at.

It's going to help to know how much you know about this field before giving more advice, but one thing I feel I should do is direct your attention to clinical trials and the results embodied within them, if you aren't aware of this stuff.

Clinical trials often look at treatment "deltas", doing inference and regression on them, and building models to find the optimal number of tests needed to achieve a given statistical power (Type I/II errors).

There are a number of considerations, including crossover trials (where the order of the trials affects the distribution) and conditional power (i.e. the power of the test changes conditionally on the successes and failures of prior results). Clinical trials look at biological phenomena, and the assay example has a lot of the same characteristics that would be considered within a normal clinical trial.
 
  • #37
Hi chiro,
no, I haven't studied that, I am a (medicinal) chemist by training and I've just moved to chemoinformatics and modelling, so I'm trying to learn these things.

In my company there are people who do biostatistics, I am indeed planning to talk to one of them shortly.

We once had a statistician visiting the company as a consultant. He talked about clinical trials and the need to appreciate the difference between significance and power of a test. It's all very well to have a significant result, but he showed that sometimes the number of subjects tested is still insufficient to avoid the other type of error. And if I understand correctly, increasing one decreases the other.
Tough stuff, especially because this concept is not always taught clearly at uni if you do a degree other than pure maths or statistics.
Not helped by the fact that while significance is relatively easy to calculate, using the well-known normal distribution formulae, power requires some more involved maths. Nothing incredibly hard, by the look of it, but I wonder how many non-statisticians/mathematicians know how to do this.
 
  • #38
The idea he is talking about is that increasing (or decreasing) one error rate has the opposite impact on the other, since making the test better for one hypothesis (null or alternative) affects its ability to check the other. You can get this relationship by examining the probabilities of reaching a correct conclusion along with the probabilities of false negatives and false positives.
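This trade-off can be made concrete with a toy one-sided z-test, using invented numbers (H0: mu = 0 against H1: mu = 1, known sigma = 1, n = 25): raising the critical value shrinks the Type I error while growing the Type II error, and vice versa.

```python
# Illustrative numbers only: one-sided z-test of H0: mu = 0 vs H1: mu = 1,
# known sigma = 1, n = 25, so the sample mean has standard error 0.2.
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

se = 0.2  # sigma / sqrt(n)
for c in [0.3, 0.4, 0.5]:          # critical value for the sample mean
    alpha = 1 - phi(c / se)        # Type I error: reject H0 when H0 is true
    beta = phi((c - 1) / se)       # Type II error: accept H0 when H1 is true
    print(f"c = {c}: alpha = {alpha:.4f}, beta = {beta:.4f}")
```

As `c` increases, `alpha` falls and `beta` rises; the only way to reduce both at once is to increase the sample size (which shrinks the standard error).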

The other thing is that the way these probabilities change (Type I/II errors) is quite involved in clinical trials - much more so than in ordinary statistics. They have to add all sorts of constraints (including ethical ones) so that the test that is done is the minimum needed to reach some significance level.

When you do speak to them you should ask what sort of models in clinical trials would be appropriate and how that translates into mathematical constraints, test statistics and of course the significance level with respect to the different hypotheses - if they are well trained they should be able to tell you that.
 
  • Like
Likes WWGD
  • #39
lavoisier said:
the need to appreciate the difference between significance and power of a test.

Introducing "power" is the only way to make sense of non-Bayesian statistics.

For example, suppose you are testing whether the mean of population A is different from the mean of population B, and your test statistic X is the difference in sample means from the two populations. The natural procedure is to choose an "acceptance" region for the null hypothesis that is some interval containing zero. Your choice of p-value determines the size of this region.

But what is the logical justification for the (intuitively obvious) procedure of making the acceptance region for the null hypothesis an interval containing zero? Why not define the acceptance region as some collection of disjoint intervals scattered about the real number line? For example, if the desired significance level is ##\alpha = 0.05##, why not pick any old set of intervals so that the probability that X lands in one of them is 0.95? We could even omit any interval containing zero and still find other intervals whose probabilities add to 0.95.

Saying that the purpose of an acceptance region is to specify a set where X is likely to land if the null hypothesis is true doesn't explain why the acceptance region should be a single interval that contains the number zero instead of, say, 10 disjoint intervals, none of which contains zero. There are lots of different ways to pick a bunch of intervals whose total probability is 0.95.
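One way to see why power settles this question: different acceptance regions can have the same probability 0.95 under the null yet very different Type II error against a given alternative. A small illustration, with invented distributions (X ~ N(0, 1) under H0 and X ~ N(2, 1) under the alternative):

```python
# Two acceptance regions for X ~ N(0, 1) under H0, both with probability ~0.95,
# but with different Type II error against the alternative X ~ N(2, 1).
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Region 1: the usual central interval [-1.96, 1.96].
p1_h0 = phi(1.96) - phi(-1.96)
beta1 = phi(1.96 - 2) - phi(-1.96 - 2)   # P(X in region | mu = 2)

# Region 2: the one-sided interval (-inf, 1.645].
p2_h0 = phi(1.645)
beta2 = phi(1.645 - 2)

print(f"P under H0: {p1_h0:.3f} vs {p2_h0:.3f}")
print(f"Type II error at mu = 2: {beta1:.3f} vs {beta2:.3f}")
```

Both regions are equally acceptable if all you ask is "probability 0.95 under H0", but the one-sided region has more power against this alternative. Only by considering power (behaviour under alternatives) can you justify one region over another.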
 
  • #40
Just following on from what Stephen Tashi mentioned - you can represent power with conditional probabilities, and the Type I/II errors then make it easier to understand not only how they affect each other but also how they are optimized.

Usually (and I say usually) it's a function of the sample size in a simple way, but it can get complicated, and in clinical trials this is analyzed very thoroughly because of how expensive clinical trials are, and also because of things like ethics committees making sure you don't do more than is necessary when biology is involved.

For reference - when power is involved, you are looking at the term P(you pick H1 | H1 is actually true), i.e. the probability of correctly rejecting the null when the alternative holds.
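That conditional probability can also be estimated by simulation: generate data under H1 and count how often the test rejects H0. A toy sketch with invented parameters (one-sided z-test, H0: mu = 0, known sigma = 1, n = 25, alpha = 0.05):

```python
# Monte Carlo estimate of power = P(reject H0 | H1 is true).
import random
from math import sqrt

random.seed(42)
n, sigma, mu_true = 25, 1.0, 0.5       # data actually come from H1: mu = 0.5
critical = 1.645 * sigma / sqrt(n)     # one-sided 5% cutoff for the sample mean

trials = 20_000
rejections = sum(
    1 for _ in range(trials)
    if sum(random.gauss(mu_true, sigma) for _ in range(n)) / n > critical
)
print(f"estimated power: {rejections / trials:.3f}")   # ~0.80 for this setup
```

The exact answer here is 1 - Phi((critical - 0.5) / 0.2), roughly 0.80, so the simulation and the closed form agree; the simulation approach generalises to designs where no closed form exists.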
 
