Quantifying and monitoring overdiagnosis in cancer screening: a systematic review of methods

Journal Club Summary

4 March 2015

Facilitated by Thanya Pathirana (PhD Candidate, CREBP)

Background

Screening leads to the identification of some asymptomatic cancers that would otherwise never manifest clinically or cause a cancer-related death during the patient’s lifetime (Sandhu and Andriole, 2012). This phenomenon, known as overdiagnosis, is increasingly regarded as the most significant harm associated with early cancer detection (Welch and Black, 2010). Estimating the frequency of overdiagnosis in cancer screening is therefore vital to health care providers, patients and policy makers alike. Although previous evidence has described several methods that could be used to quantify and monitor overdiagnosis over time, their estimates vary widely. This systematic review was therefore conducted to evaluate these different methodologies and explore their advantages and disadvantages for estimating the magnitude of overdiagnosis and for monitoring it over time (Carter et al., 2015).

Paper presented

Eligibility criteria for inclusion

Population: Studies that attempt to quantify overdiagnosis resulting from a cancer screening test in an asymptomatic population, and studies that draw conclusions about the amount of overdiagnosis based on the biologic characteristics of tumour progression

Cancer types included (9): prostate, breast, lung, colon, melanoma, bladder, renal, thyroid and uterine

Intervention: Method for measuring, estimating, or quantifying overdiagnosis

Outcome:  Magnitude of overdiagnosis

Time Frame: Any time frame (up to 28 February 2014)

Setting: Any setting

Study Design: Randomized controlled trials (RCTs), prospective or retrospective cohort studies, ecologic studies, case-control studies, modeling studies, and systematic reviews that independently computed a new estimate of overdiagnosis based on data from identified studies

(Excluded: articles not published in English, case series, case reports and non-systematic reviews)

Critical appraisal

The authors used a comprehensive strategy to search the PubMed and MEDLINE databases, supplemented by hand searching; however, publication bias was not assessed. Selected articles were independently reviewed by two reviewers, and data extraction was done with a standardized form and verified by a second reviewer. The 52 studies included in the review were assessed for risk of bias and overall strength of evidence using standard criteria. Cohort, ecological, RCT, and pathological and imaging studies were assessed for selection bias, measurement bias and confounding, whereas modelling studies were assessed for the transparency and evidence-based nature of their assumptions, the probability of bias in the data used in the model, external validation of the model, and whether sensitivity analyses were performed for uncertain variables. Strength of evidence was assessed on risk of bias, analysis (for ecologic, cohort and RCT studies), directness, external validity, precision and consistency. Because of the heterogeneity of study designs, populations and results, a qualitative synthesis was performed, organizing the results by study design and cancer type.

Summary of results

Nine hundred and sixty-eight abstracts and 120 full texts were reviewed, and 52 individual studies were selected for inclusion. These were categorised into four methodological groups: RCTs (n=3), cohort and ecological studies (n=20), pathological and imaging studies (n=8) and modelling studies (n=21).

The authors argued that RCTs carry a low risk of bias, but that their results may not be generalizable and that the design is not suitable for ongoing monitoring of overdiagnosis.

The pathological and imaging studies drew conclusions about overdiagnosis from examination of the biological characteristics of cancers. This is a simple design, but it rests on the uncertain assumption that the measured characteristics are closely correlated with disease progression.

Modelling studies are less time-consuming, but they are limited by their reliance on complex mathematical equations to simulate the natural course of screen-detected cancer, which is itself the fundamental unknown.
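As a rough illustration only (not any specific model from the review), the sketch below captures the core idea such simulations encode: a screen-detected cancer counts as overdiagnosed if death from other causes would have occurred before the cancer surfaced clinically. The exponential distributions and parameter values are hypothetical assumptions.

```python
import random

def simulate_overdiagnosis_fraction(n=100_000, mean_sojourn_years=4.0,
                                    mean_years_to_other_cause_death=10.0,
                                    seed=42):
    """Toy Monte Carlo sketch: a screen-detected cancer is 'overdiagnosed'
    if death from other causes would occur before the cancer would have
    presented clinically. Distributions and parameters are hypothetical."""
    rng = random.Random(seed)
    overdiagnosed = 0
    for _ in range(n):
        # Remaining preclinical (sojourn) time at the moment of screen detection
        time_to_clinical = rng.expovariate(1.0 / mean_sojourn_years)
        # Time to death from competing (non-cancer) causes
        time_to_other_death = rng.expovariate(1.0 / mean_years_to_other_cause_death)
        if time_to_other_death < time_to_clinical:
            overdiagnosed += 1
    return overdiagnosed / n

print(f"Estimated overdiagnosed fraction: {simulate_overdiagnosis_fraction():.1%}")
```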

Ecological and cohort studies are valuable for monitoring “real world” overdiagnosis over time, unlike RCTs, which require more resources and time and lack external validity. However, ecological and cohort studies are limited by several factors, including the lack of agreed standards, variable data quality, inadequate follow-up time, and the potential for population-level confounding, although several of the included studies had addressed these limitations reasonably well.
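To make the quantification concrete, one common way such studies express the magnitude of overdiagnosis is as the excess cumulative incidence in a screened population relative to a comparable unscreened one. The sketch below uses invented numbers, and the denominator chosen here (all cancers diagnosed in the screened population) is only one of several options, which is itself a recognised source of variation between estimates.

```python
def excess_incidence_overdiagnosis(cases_screened, person_years_screened,
                                   cases_unscreened, person_years_unscreened):
    """Excess-incidence estimate of overdiagnosis: cancers diagnosed in the
    screened population beyond what the unscreened population's incidence
    rate predicts, expressed as a share of all cancers diagnosed in the
    screened population. Assumes comparable groups and adequate follow-up."""
    rate_unscreened = cases_unscreened / person_years_unscreened
    expected_cases = rate_unscreened * person_years_screened
    excess_cases = cases_screened - expected_cases
    return excess_cases / cases_screened  # denominator: all diagnosed cancers

# Hypothetical illustration only (not data from the review):
print(f"{excess_incidence_overdiagnosis(1300, 500_000, 1000, 500_000):.1%}")
```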

Discussion/Journal Club commentary

The review concluded that well conducted ecological and cohort studies in multiple settings were the most appropriate designs for quantifying and monitoring overdiagnosis in cancer screening programs. The authors recommended establishing a team of multinational, unbiased researchers to develop internationally agreed standards for ecological and cohort studies.

The journal club agreed that cohort studies, when conducted in a methodologically sound manner, have a high potential for accurately estimating and monitoring overdiagnosis over time by providing a more “real world” view of overdiagnosis. However, we did not entirely agree that this is the best approach, as RCTs carry the least bias and provide a higher strength of evidence than cohort and ecological studies. We do agree with the authors that RCT results may lack generalizability because of limited external validity. We therefore suggest that estimates from both study designs be used in conjunction to provide a more accurate and more generalizable estimate of overdiagnosis in cancer screening programmes.

This review is the first to systematically evaluate the different methodologies that have been used to estimate overdiagnosis in cancer screening. We suggest it could provide the foundation for similar work estimating overdiagnosis in other chronic disease screening programmes as well.

References

CARTER, J. L., COLETTI, R. J. & HARRIS, R. P. 2015. Quantifying and monitoring overdiagnosis in cancer screening: a systematic review of methods. British Medical Journal, 350, g7773.

SANDHU, G. S. & ANDRIOLE, G. L. 2012. Overdiagnosis of prostate cancer. Journal of the National Cancer Institute Monographs, 2012, 146-151.

WELCH, H. G. & BLACK, W. C. 2010. Overdiagnosis in cancer. Journal of the National Cancer Institute, 102, 605-613.

To read further responses to this paper, go to:

http://www.bmj.com/content/350/bmj.g7773/rapid-responses