The Cass Report (UK)

  • #1
Lynch101
Just wondering if anyone here is familiar with the Cass Report, which was recently published in the UK?

I've been reading a lot of different material on it, including criticisms of its methodology. I was wondering if anyone here might have any insights into the report itself or the criticisms of it?
 
  • #2
I do wonder what the "Cass Report" might be about?
 
  • #3
Report of the Cass Review

Hilary Cass conducted a review of the state of gender medicine in the UK, particularly with regard to children (so, roughly speaking, a review of the processes and decisions around deciding if a child is "really trans" and if so, potentially proceeding to treatment with hormones and surgery). My understanding is that the conclusion was that the field is poorly evidenced, with many studies of "gender questioning" children being methodologically flawed. I think the criticisms boil down to arguments about whether "there's loads of evidence that Cass ignored", or "there's loads of worthless pseudo-science that Cass correctly discarded".

The report has led to an abrupt change in practice in England and Scotland, with the NHS in both countries announcing they've stopped prescribing puberty blockers in this context within days of the final publication. So it's quite politically charged. Given the general polarisation level of the whole "boys can be girls too" debate and the number of agendas in play here I'd be a bit wary of any opinion, to be honest. If I wanted to investigate, I would tend to actually review a paper or two myself and see if the report's criticisms are accurate.

Whether discussing reviews of such studies falls within PF's remit or sails too close to the political sphere I can't judge.
 
Last edited:
  • #4
Ibix said:
Whether discussing reviews of such studies falls within PF's remit or sails too close to the political sphere I can't judge.
Yeah, I'll close this thread temporarily for Mentor review. If the thread is allowed, it may be moved to the Medical forum.
 
  • #5
Thread reopened after Mentor discussion and moved to Medical forum.
Lynch101 said:
I was wondering if anyone here might have any insights into the report itself or the criticisms of it?
Please keep the discussion confined to this topic only. Any diverging into politics will be shut down.
 
  • #6
jrmichler said:
Thread reopened after Mentor discussion and moved to Medical forum.

Please keep the discussion confined to this topic only. Any diverging into politics will be shut down.
Thanks for re-opening it.

I'm purely interested to know if the methodology of the review is robust, so the discussion could potentially be limited to principles of systematic reviews.

I think the report is robust, but I've read criticisms of it and am interested to hear from the PF community, who would have a better understanding of these things.

I'm reading up on other information as well, but I figured it would be good to ask here as well.

For example, is anyone familiar with the GRADE rating system? Is it widely used and accepted?
 
  • #7
Does anyone know if excluding low quality studies from the synthesis of results, in a systematic review, is bad practice?
 
  • #8
"Low quality" ≡ "unreliable": either poorly executed, or without enough data to show a statistical difference, or sometimes carried out by a researcher whose other studies suggest low credibility (a poor reputation).
 
  • #9
Lynch101 said:
Does anyone know if excluding low quality studies from the synthesis of results, in a systematic review, is bad practice?
In this field, you might find a study on the incidence of suicidal thoughts pre- and post-treatment. All studies have an attrition rate (people who answer the first time but not the second), but here there's a good chance that some attrition is suicide, and if you don't follow up your non-responders you have a (potentially very large) bias. But following up non-responders is expensive, so it may not happen. So what should a systematic review do with a study where it doesn't? Maybe they can apply some model to estimate a correction for the attrition rate, but unless there's a strongly evidenced model they can use that's just guessing. They may just have to say "the methodology here is too flawed to show anything useful".

Certainly you should have some reasonably objective measure for whether a study is worth including or not (I presume that's what the GRADE system is supposed to do). And you can argue about a particular study's quality. But why would giving little-to-no weight to garbage be bad practice?
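(Toy illustration, not tied to any real study: the attrition mechanism above can be simulated in a few lines. All numbers and the dropout model below are made up; the point is only that outcome-correlated dropout biases a naive pre/post comparison even when the true treatment effect is zero.)

```python
import random

random.seed(0)

# Hypothetical cohort: each subject has a baseline symptom score and a
# follow-up score. The "treatment" does nothing on average, but subjects
# with worse follow-up scores are much more likely to drop out before
# the second measurement.
n = 10_000
pre = [random.gauss(50, 10) for _ in range(n)]
post = [p + random.gauss(0, 10) for p in pre]  # zero true effect

def dropout_prob(score):
    # Made-up model: dropout probability rises with the follow-up score
    # (i.e. with a worse outcome), capped at 90%.
    return min(0.9, max(0.0, (score - 50) / 40))

completers = [(a, b) for a, b in zip(pre, post)
              if random.random() > dropout_prob(b)]

mean = lambda xs: sum(xs) / len(xs)
naive_change = (mean([b for _, b in completers])
                - mean([a for a, _ in completers]))
print(f"apparent change among completers: {naive_change:+.2f}")
# A spurious "improvement" (negative change) appears despite a true effect of zero.
```

Excluding or down-weighting such a study, or correcting it with a well-evidenced dropout model, are the options a reviewer has; absent such a model, the honest verdict is often "uninformative".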
 
  • #10
Lynch101 said:
Does anyone know if excluding low quality studies from the synthesis of results, in a systematic review, is bad practice?
Of course, it only matters if the low-quality studies yield a different answer than high-quality studies. So the question boils down to "how many low-quality studies would it take to convince you a high-quality study was wrong?" 2? 10? 50?

(Note: this is a general answer - I am not qualified to comment on the quality of any study under discussion)
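(A toy way to make that question quantitative, with entirely made-up numbers: in a fixed-effect meta-analysis, each study's pull on the pooled estimate is proportional to the inverse of its variance, so several imprecise studies barely move one precise study.)

```python
# Toy fixed-effect meta-analysis: the pooled estimate is the
# inverse-variance-weighted mean of the study estimates.
def pooled(estimates, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Hypothetical numbers: one precise study finds no effect
# (estimate 0.0, variance 0.01); five imprecise studies all find
# a large effect (estimate 1.0, variance 1.0).
estimates = [0.0] + [1.0] * 5
variances = [0.01] + [1.0] * 5
print(round(pooled(estimates, variances), 3))  # → 0.048
```

Note that inverse-variance weighting down-weights rather than excludes; a GRADE-style exclusion is just the limiting case of assigning zero weight.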
 
Last edited:
  • #11
Ibix said:
In this field, you might find a study on the incidence of suicidal thoughts pre- and post-treatment. All studies have an attrition rate (people who answer the first time but not the second), but here there's a good chance that some attrition is suicide, and if you don't follow up your non-responders you have a (potentially very large) bias. But following up non-responders is expensive, so it may not happen. So what should a systematic review do with a study where it doesn't? Maybe they can apply some model to estimate a correction for the attrition rate, but unless there's a strongly evidenced model they can use that's just guessing. They may just have to say "the methodology here is too flawed to show anything useful".

Certainly you should have some reasonably objective measure for whether a study is worth including or not (I presume that's what the GRADE system is supposed to do). And you can argue about a particular study's quality. But why would giving little-to-no weight to garbage be bad practice?
I would completely agree with the answer to your rhetorical question. There have been (what I believe are) spurious objections to the report, but I want to make sure I'm not missing anything.

For the report, a number of peer-reviewed systematic reviews were commissioned. [Some of] those reviews used the GRADE system (which is widely accepted, I believe). They used predefined inclusion/exclusion criteria and then an amended version of the Newcastle-Ottawa scale (for non-randomised trials) to assess the quality of the studies. Most were judged to be of either low or moderate quality, with only one high quality study.

The report then says, "The low quality studies were excluded from the synthesis of results."

This seems perfectly reasonable because, as per your question, why would you give weight to unreliable data?

But some of the objections I have read have claimed that all studies which were not excluded due to the exclusion criteria, should be included in the evidence synthesis.

Is there a nuance between "evidence synthesis" and "synthesis of results" that I am missing perhaps?
 
  • #12
Vanadium 50 said:
Of course, it only matters if the low-quality studies yield a different answer than high-quality studies. So the question boils down to "how many low-quality studies would it take to convince you a high-quality study was wrong?" 2? 10? 50?

(Note: this is a general answer - I am not qualified to comment on the quality of any study under discussion)
Ah yes good point.

I think some people are objecting to the exclusion of low quality studies because they think those studies would support a particular narrative.
 
  • #15
Lynch101 said:
Is there a nuance between "evidence synthesis" and "synthesis of results" that I am missing perhaps?
Not that I can see. I think @Vanadium 50's response is spot on.
Lynch101 said:
I think some people are objecting to the exclusion of low quality studies because they think those studies would support a particular narrative.
I think that's exactly what's going on. Cass excludes or ignores a lot of studies that claim to support one side of the debate. She says it's because they're low quality so don't actually add anything to either side; some critics say it's because she pre-decided what the outcome would be.

As I said before, you probably want to read a couple of the low quality studies and Cass' reviews of them and form your own opinion. I suspect the review is fair and her critics are biased, but official investigations certainly can have their conclusions written first.
 
  • #16
Lynch101 said:
Apologies, I'm not sure I follow. Do you mean you don't consider the GRADE system to be very rigorous?
It is rigorous rationalization of idiopathic confirmation bias; so, no, I do not consider it to be anything more than psycho-/philoso-babble.
 
  • #17
Bystander said:
It is rigorous rationalization of idiopathic confirmation bias; so, no, I do not consider it to be anything more than psycho-/philoso-babble.
I'm not sure I see the issue. It seems like a reasonable list of factors one should take into account when judging the strengths and weaknesses of evidence.
 
  • #18
Bystander said:
It is rigorous rationalization of idiopathic confirmation bias; so, no, I do not consider it to be anything more than psycho-/philoso-babble.
As far as I know, it's a widely accepted standard.

I guess good or bad, the nature of objections isn't so much with the particular system used but more whether it was followed or not.
 
  • #19
Ibix said:
Not that I can see. I think @Vanadium 50's response is spot on.

I think that's exactly what's going on. Cass excludes or ignores a lot of studies that claim to support one side of the debate. She says it's because they're low quality so don't actually add anything to either side; some critics say it's because she pre-decided what the outcome would be.

As I said before, you probably want to read a couple of the low quality studies and Cass' reviews of them and form your own opinion. I suspect the review is fair and her critics are biased, but official investigations certainly can have their conclusions written first.
Cheers. I'll have to read some of the low quality studies in more detail. I've had a look at the NICE* reviews of them.

I share your suspicions, that the review is fair (to a relatively high degree) and that the critics are biased. I'm in discussion with one such critic who is a researcher, who claims to be writing a paper outlining the criticisms.

I'm just trying to look into some of the criticisms he has already mentioned in public, because I anticipate these will form the basis of the paper - if one is indeed forthcoming and it's not just a face saving claim.

*The National Institute for Health and Care Excellence conducted the systematic reviews of the evidence.
 
  • #21
One could (were one bored enough) argue for a long time about an objective evaluation of subjective data. The Cass review appears to (correctly, IMO) do little more than point out that there isn't enough reliable data to have much of a conversation.
 
  • #22
Bystander said:
I do not consider [GRADE] to be anything more than psycho-/philoso-babble.
You are entitled to your opinion.

Bystander said:
[GRADE] is rigorous rationalization of idiopathic confirmation bias
But that is a personal theory that is contrary to mainstream science (GRADE is a widely recognised tool of evidence-based medicine whose aim is to eliminate confirmation and other biases).
 
Last edited:
  • #23
Dullard said:
The Cass review appears to (correctly, IMO) do little more than point out that there isn't enough reliable data to have much of a conversation.
No, the Cass report does much more than that.

In particular it makes 32 specific recommendations (summarised here) and has led to the National Health Service in England (NHS England) restructuring its provision of gender identity services for children and young people, and changing its clinical policy on the prescription of puberty-suppressing hormones.
 
  • #24
Dullard said:
One could (were one bored enough) argue for a long time about an objective evaluation of subjective data. The Cass review appears to (correctly, IMO) do little more than point out that there isn't enough reliable data to have much of a conversation.
Cheers Dullard, that is my interpretation of it as well*, but there are attempts to discredit it.

*That there isn't enough reliable data [upon which to base serious medical interventions].
 
  • #25
To be fair, an individual's perception of the report probably boils down to a single question:
Is evidence required to justify treatment, or to prohibit it? The report does have a 'justify' bias (for those who consider that a 'bias').
 
  • #26
Dullard said:
To be fair, an individual's perception of the report probably boils down to a single question:
Is evidence required to justify treatment, or to prohibit it? The report does have a 'justify' bias (for those who consider that a 'bias').
I think the point is that the lack of evidence doesn't only mean that we don't know if the treatment does anything, but also that we don't know if it is actively harmful. Where there is reliable evidence, it seems to indicate a higher incidence of psychological issues in these patients than in the general population. If a patient's gender issue is a symptom of something else, that something else needs treating and the gender problems will resolve themselves, whereas treating the gender issue won't fix the underlying psychological problem. And the lack of evidence means that we don't know (our opinions aside) which way around the causation is.
 
  • #27
Lynch101 said:
Cheers. I'll have to read some of the low quality studies in more detail. I've had a look at the NICE* reviews of them.

I share your suspicions, that the review is fair (to a relatively high degree) and that the critics are biased. I'm in discussion with one such critic who is a researcher, who claims to be writing a paper outlining the criticisms.

I'm just trying to look into some of the criticisms he has already mentioned in public, because I anticipate these will form the basis of the paper - if one is indeed forthcoming and it's not just a face saving claim.

*The National Institute for Health and Care Excellence conducted the systematic reviews of the evidence.
The issue critics have with the reviews of the evidence is that they rate many of them poor quality because they are not blind and they have no control group. Rating evidence as poor quality based on these parameters doesn't make sense. You can't have a blind study since puberty is very visible to pretty much all who go through it, so you can't have blind participants or researchers. Whether the subject is receiving the medicine or not would be apparent very quickly.

You also can't have an ethical study on HRT or puberty blockers with a control group, whether that control group is blind or not. If it isn't blind, the control group will have low adherence, as they will seek out the treatment in other ways, for example. If it is blind, they might seek out other options for treatment too once they are convinced it isn't working.

Telling a child experiencing gender dysphoria that they will not experience the results of puberty, like breast growth or increased bone density, would also be highly unethical and psychologically damaging if they then start to see those results despite taking the placebo.

This article explains multiple other issues with the review and I recommend reading it if you've read the Cass Review and are taking its findings seriously
https://law.yale.edu/sites/default/files/documents/integrity-project_cass-response.pdf
 
Last edited by a moderator:
  • #28
dig6394 said:
You can't have a blind study since puberty is very visible to pretty much all who go through it, so you can't have blind participants or researchers. Whether the subject is receiving the medicine or not would be apparent very quickly.

You also can't have an ethical study on HRT or puberty blockers with a control group, whether that control group is blind or not. If it isn't blind, the control group will have low adherence, as they will seek out the treatment in other ways, for example. If it is blind, they might seek out other options for treatment too once they are convinced it isn't working.
If you are paraphrasing accurately, this would seem to be agreement that the evidence base is poor, plus a claim that it cannot be made better, wouldn't it?

I'll take a look at the report.
 
  • #29
dig6394 said:
The issue critics have with the reviews of the evidence is that they rate many of them poor quality because they are not blind and they have no control group. Rating evidence as poor quality based on these parameters doesn't make sense. You can't have a blind study since puberty is very visible to pretty much all who go through it, so you can't have blind participants or researchers. Whether the subject is receiving the medicine or not would be apparent very quickly.

You also can't have an ethical study on HRT or puberty blockers with a control group, whether that control group is blind or not. If it isn't blind, the control group will have low adherence, as they will seek out the treatment in other ways, for example. If it is blind, they might seek out other options for treatment too once they are convinced it isn't working.

Telling a child experiencing gender dysphoria that they will not experience the results of puberty, like breast growth or increased bone density, would also be highly unethical and psychologically damaging if they then start to see those results despite taking the placebo.

This article explains multiple other issues with the review and I recommend reading it if you've read the Cass Review and are taking its findings seriously
https://law.yale.edu/sites/default/files/documents/integrity-project_cass-response.pdf
The studies weren't rated as poor because they weren't RCTs; other methodological flaws, such as a lack of long-term follow-up, earned them the "very low" quality rating.

Despite not being RCTs, they could still have been rated as "moderate" quality, or even just "low quality".

Studies which aren't RCTs can also be upgraded if they are designed well enough.

It's simply a matter of fact that RCTs can provide higher quality evidence; that's why they are carried out where possible. Otherwise researchers wouldn't bother going to the effort.

Still, that's not why they were downgraded.

I'll check out the link though, cheers.
 
  • #30
Thread closed for Moderation...
 
  • #31
After a Mentor discussion and the deletion of one somewhat OT post, the thread is reopened provisionally. Please remember the earlier caution in the thread to stay on topic in discussing this report. Thank you.
 
  • #32
dig6394 said:
This article explains multiple other issues with the review and I recommend reading it if you've read the Cass Review and are taking its findings seriously
https://law.yale.edu/sites/default/files/documents/integrity-project_cass-response.pdf
BMJ paper commenting on the Integrity Project paper linked above (McNamara et al) and another (Noone et al):
https://adc.bmj.com/content/early/2024/10/13/archdischild-2024-327994.info

It's only five pages long. Highlights (IMO) are that the authors see McNamara's paper as effectively a legal position paper (not peer reviewed, but immediately submitted in court cases) and not an attempt at objective commentary, and that Cass is, if anything, too generous in her ratings of the reliability of studies (Chen et al, Tordoff et al) that McNamara suggests were unfairly excluded.
 
  • #33
Ibix said:
and another (Noone et al):
I can't tell if this is a joke or not.

"Who wrote the paper?"

"Noone et al."
:oldlaugh:
 
  • #35
berkeman said:
I can't tell if this is a joke or not.

"Who wrote the paper?"

"Noone et al."
:oldlaugh:
The Odysseus and the Cyclops resonance did cross my mind. However, I gather it's probably a variant on the surname O'Nuadhain, which means son of Nuadha (an Irish first name).
 
