Comparisons of established risk prediction models for cardiovascular disease: systematic review – facilitated by Senior Research Fellow Georga Cooke

Reference

Siontis, George, et al. “Comparisons of established risk prediction models for cardiovascular disease: systematic review.” BMJ 344 (2012).

 

Abstract

OBJECTIVE:  To evaluate the evidence on comparisons of established cardiovascular risk prediction models and to collect comparative information on their relative prognostic performance.

DESIGN:  Systematic review of comparative predictive model studies.

DATA SOURCES:  Medline and screening of citations and references.

STUDY SELECTION:  Studies examining the relative prognostic performance of at least two major risk models for cardiovascular disease in general populations.

DATA EXTRACTION:  Information on study design, assessed risk models, and outcomes. We examined the relative performance of the models (discrimination, calibration, and reclassification) and the potential for outcome selection and optimism biases favouring newly introduced models and models developed by the authors.

RESULTS:  20 articles including 56 pairwise comparisons of eight models (two variants of the Framingham risk score, the assessing cardiovascular risk using Scottish Intercollegiate Guidelines Network guidelines to assign preventative treatment (ASSIGN) score, the systematic coronary risk evaluation (SCORE) score, the Prospective Cardiovascular Münster (PROCAM) score, the QRESEARCH cardiovascular risk (QRISK1 and QRISK2) algorithms, and the Reynolds risk score) were eligible. Only 10 of 56 comparisons exceeded a 5% relative difference based on the area under the receiver operating characteristic curve. Use of other discrimination, calibration, and reclassification statistics was less consistent. In 32 comparisons, an outcome was used that had been used in the original development of only one of the compared models, and in 25 of these comparisons (78%) the outcome-congruent model had a better area under the receiver operating characteristic curve. Moreover, authors always reported better areas under the receiver operating characteristic curve for models that they themselves developed (in five articles on newly introduced models and in three articles on subsequent evaluations).

CONCLUSIONS:  Several risk prediction models for cardiovascular disease are available and their head to head comparisons would benefit from standardised reporting and formal, consistent statistical comparisons. Outcome selection and optimism biases apparently affect this literature.

 

Discussions of the group

  • This was a challenging paper for many of our group to read.
  • This paper assumed a deep understanding of statistical techniques used in prognostic models.  Our resident experts gave us a brief tutorial on discrimination, calibration and reclassification.
  • Optimism bias was a new concept to many – we debated how it differs from publication bias.
  • We discussed how we communicate risk, natural history and prognosis with patients.
  • Overall, this study would not change what we do in clinical practice; rather, it supports our current practice.  We look forward to future publications on standardising the methodological approach to systematic reviews of risk prediction studies.
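As a companion to the mini-tutorial, the three statistics the group discussed can be illustrated with a short, self-contained sketch. The data and the two models below (`model_a`, `model_b`) are toy values invented for illustration, not taken from the paper; the functions show the standard formulas: discrimination via the rank-based area under the ROC curve, a crude calibration check (mean predicted risk vs observed event rate), and the net reclassification improvement (NRI) at a single risk threshold.

```python
# Illustrative sketch only: toy cohort and hypothetical models,
# not data from Siontis et al.

def auc(scores, outcomes):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen case scores higher than a
    randomly chosen non-case (ties count half)."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration(scores, outcomes):
    """Crude calibration-in-the-large: mean predicted risk vs
    observed event rate (a well calibrated model has these close)."""
    return sum(scores) / len(scores), sum(outcomes) / len(outcomes)

def nri(old, new, outcomes, threshold=0.2):
    """Net reclassification improvement for one risk threshold:
    events moving up and non-events moving down count as correct."""
    up_e = down_e = up_n = down_n = events = nonevents = 0
    for o, n, y in zip(old, new, outcomes):
        moved_up = o < threshold <= n
        moved_down = n < threshold <= o
        if y == 1:
            events += 1
            up_e += moved_up
            down_e += moved_down
        else:
            nonevents += 1
            up_n += moved_up
            down_n += moved_down
    return (up_e - down_e) / events + (down_n - up_n) / nonevents

# Toy cohort: observed events and predicted 10-year risks
# from two hypothetical models.
outcomes = [1, 1, 1, 0, 0, 0, 0, 0]
model_a  = [0.30, 0.25, 0.15, 0.22, 0.10, 0.08, 0.05, 0.12]
model_b  = [0.35, 0.28, 0.21, 0.18, 0.09, 0.07, 0.06, 0.11]

auc_a, auc_b = auc(model_a, outcomes), auc(model_b, outcomes)
# The review's yardstick: does the relative AUC difference exceed 5%?
rel_diff = abs(auc_b - auc_a) / auc_a

print(f"AUC A={auc_a:.3f}, AUC B={auc_b:.3f}, relative diff={rel_diff:.1%}")
print("calibration (mean predicted, observed):", calibration(model_a, outcomes))
print(f"NRI at 20% threshold: {nri(model_a, model_b, outcomes):.3f}")
```

On this toy data the two AUCs differ by about 7% in relative terms, so this pair would count among the minority of comparisons exceeding the review's 5% criterion, while the NRI shows how reclassification can add information beyond the AUC alone.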

 

Follow-up reading

For those wanting to cement the mini-tutorial we had on discrimination, calibration and reclassification, these articles may be of interest:

Cook, Nancy R. “Statistical evaluation of prognostic versus diagnostic models: beyond the ROC curve.” Clinical Chemistry 54.1 (2008): 17-23.

Echouffo-Tcheugui, Justin B., and Andre P. Kengne. “Risk models to predict chronic kidney disease and its progression: a systematic review.” PLoS Medicine 9.11 (2012): e1001344.