Raine, Rosalind; Sanderson, Colin; Hutchings, Andrew; et al. · Lancet (London, England) · 2004 · DOI
This study examined how medical guidelines are developed by testing whether expert panels' judgments match what published research evidence shows. Researchers asked differently composed groups of doctors to rate whether certain treatments (such as therapy or antidepressants) were appropriate for conditions including chronic fatigue syndrome. The panels' ratings agreed with the published evidence only about half the time, and providing a literature review and choosing certain group compositions improved that agreement.
For ME/CFS patients, this study matters because it shows that clinical guidelines, which shape treatment recommendations and access to care, may not consistently reflect the actual research evidence. Understanding how expert consensus is formed helps patients and advocates recognize when guidelines might need updating in light of new evidence, and underscores the importance of ensuring that robust research data informs clinical practice recommendations.
This study does not demonstrate which treatments are actually effective for ME/CFS, nor does it evaluate treatment outcomes. It also does not prove that guideline development *should* be based entirely on evidence alone, only that current consensus methods often diverge from published literature. The findings reflect the guideline development process itself, not the validity of the treatments being rated.
About the PEM badge: “PEM required” means post-exertional malaise was an explicit required diagnostic criterion for participant inclusion in this study — not that PEM was studied, observed, or discussed. Studies using criteria that do not require PEM (e.g. Fukuda, Oxford) are tagged “PEM not required”. How the atlas works →