The challenges of determining noninferiority margins: a case study of noninferiority randomized controlled trials of novel oral anticoagulants facilitated by Associate Professor Charles Leduc

References

The challenges of determining noninferiority margins: a case study of noninferiority randomized controlled trials of novel oral anticoagulants. Grace Wangge MD MSc, Kit C.B. Roes PhD, Anthonius de Boer MD PhD, Arno W. Hoes MD PhD, Mirjam J. Knol PhD. CMAJ 2013;185(3):222-227.


The paper presents a way to produce a synoptic determination of common noninferiority (NI) margins for a set of noninferiority trials. It also allows an evaluation of how the noninferiority margins selected in the published trials affected the published conclusions. The discussion is based on a review of NI trials of direct thrombin inhibitors and direct factor Xa inhibitors claimed to be as effective as enoxaparin for the prevention of venous thromboembolism in patients undergoing elective hip- or knee-replacement surgery.

The paper prompts two distinct exercises: first, revisiting the appraisal rubric for noninferiority trials; second, discussing how to present usable information from a review of noninferiority trials. The appraisal requires judgment on the following: whether the assumptions of noninferiority trials, namely assay sensitivity and constancy of effect, were respected; the method used to define the noninferiority margin (NI) or threshold (the focus of the paper); and the assessment of potential harm. The assumptions are briefly discussed, and we agree that the topic of the review presents a rare set of conditions that allow the authors to conclude, prudently, that the assumptions were probably respected. There is no presentation of the potential harms of the test interventions.

The steps needed to generate the numerical data for the analysis of noninferiority trials are as follows. First, estimate the effect of the active control (C) compared with placebo. For a conservative estimate of the effect size, the upper bound of the 95% confidence interval (CI) of the pooled effect size (i.e., the bound closest to no effect) is used rather than the point estimate; this is referred to as M1. The next step is to define M2, an estimate of how much of M1 should be preserved. It corresponds to the largest clinically acceptable difference, in terms of decreased efficacy (degree of inferiority), of the test drug (T) compared with the active control (C). The “loosest” estimate is 50% of M1; this becomes the noninferiority margin (NI). Selecting the noninferiority margin is a clinical decision, much like choosing the expected effect size when calculating sample size for a superiority trial; it reflects the same concept of a clinically relevant difference between two interventions. We were reminded that the selection of the noninferiority margin must also take other clinically relevant considerations into account, such as adverse outcomes. For example, a wider margin (tolerating a new intervention that preserves only 50% of the original effect size) may be accepted for a new intervention that produces less frequent or less severe adverse outcomes.
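To make the arithmetic concrete, here is a minimal Python sketch of this derivation on the risk-difference scale, using entirely hypothetical numbers (not the pooled estimates from the paper). Whether the "conservative" bound is the upper or lower CI limit depends on how the effect is coded; here benefit is coded as a positive risk reduction, so the bound closest to no effect is the lower limit.

```python
# Minimal sketch of deriving M1 and the noninferiority margin M2 on the
# risk-difference scale. All numbers are hypothetical, not the paper's data.

# Hypothetical pooled absolute risk reduction in VTE (placebo risk minus
# control risk) for the active control (enoxaparin) versus placebo, with 95% CI.
pooled_arr = 0.20
ci_low, ci_high = 0.14, 0.26

# M1: conservative estimate of the control's effect versus placebo,
# i.e. the CI bound closest to "no effect" (here the lower limit, 0.14).
m1 = ci_low

# M2: the largest clinically acceptable loss of efficacy. Preserving at least
# 50% of M1 (the "loosest" choice) leaves a margin equal to 50% of M1.
preserved_fraction = 0.50
m2 = (1 - preserved_fraction) * m1   # 0.07, i.e. 7 excess events per 100 patients

print(f"M1 = {m1:.2f}, noninferiority margin M2 = {m2:.2f}")
# Noninferiority would be claimed if the excess risk of the test drug over the
# control (T minus C) is shown, with 95% confidence, to be smaller than M2.
```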

For this report the authors independently defined M1 by pooling results from the original studies. The advantage of this approach is that the noninferiority margins (NI) are derived independently of the reviewed noninferiority studies. By presenting the studies’ data in a forest plot, the position of the NI threshold can be shown with respect to each study’s delta (the difference between the control intervention and the test intervention, C-T) and its confidence interval. The paper presents forest plots of both the risk difference (RD) and the relative risk (RR), but the underlying tables are in Appendix 4, not presented in the paper itself but available from the CMAJ website. The method used to identify the studies is detailed in Appendix 1 (CMAJ website). The authors report that they used only published papers and did not attempt to retrieve unpublished reports, and they rightly identify publication bias as an important source of error in determining the NI margin. Defining M1, the original pooled effect size and its confidence interval, must follow proper systematic review and meta-analysis practice. This approach is preferable to aggregating the published noninferiority margins.
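As an illustration of this kind of display, the sketch below draws a forest plot of hypothetical trial results (invented data, not the trials reviewed in the paper) on the risk-difference scale and overlays an independently derived NI threshold, reusing the hypothetical M2 of 0.07 from the sketch above.

```python
# Forest plot of hypothetical noninferiority trial results (invented data)
# with an independently derived NI threshold overlaid.
import matplotlib.pyplot as plt

# (trial name, risk difference C - T, lower 95% CI bound, upper 95% CI bound)
trials = [
    ("Trial A", -0.010, -0.035, 0.015),
    ("Trial B",  0.005, -0.020, 0.030),
    ("Trial C", -0.040, -0.085, 0.005),  # lower bound crosses the threshold: NI not shown
]
ni_margin = 0.07  # hypothetical M2 on the risk-difference scale

fig, ax = plt.subplots(figsize=(6, 3))
for i, (name, rd, lo, hi) in enumerate(trials):
    ax.errorbar(rd, i, xerr=[[rd - lo], [hi - rd]], fmt="s", color="black", capsize=3)
ax.axvline(0, color="grey", linewidth=1)             # line of no difference
ax.axvline(-ni_margin, color="red", linestyle="--")  # NI threshold: the lower CI bound
                                                     # of (C - T) must stay to its right
ax.set_yticks(range(len(trials)))
ax.set_yticklabels([name for name, *_ in trials])
ax.set_xlabel("Risk difference (C - T)")
ax.set_title("Hypothetical forest plot with independently derived NI margin")
plt.tight_layout()
plt.show()
```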

A synoptic study of noninferiority trials should favour using the relative risk (RR) rather than the risk difference (RD), in order to control for the effect of baseline risk on the absolute difference (RD). Using the RD requires demonstrating that the baseline risk did not differ significantly between the noninferiority trials of the test drug (T) and the original studies of the active control (C).
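A small numerical illustration of this point, with hypothetical risks chosen only for arithmetic clarity: the same relative effect translates into very different absolute differences when the baseline risk changes.

```python
# The same relative risk implies different risk differences at different baseline risks.
rr = 1.25                               # test arm has 25% more events, relatively
for baseline_risk in (0.30, 0.10):      # hypothetical event risk in the control arm
    risk_test = baseline_risk * rr
    rd = risk_test - baseline_risk      # absolute difference, T - C
    print(f"baseline {baseline_risk:.0%}: RR = {rr}, RD = {rd:+.3f}")
# baseline 30% -> RD = +0.075; baseline 10% -> RD = +0.025: same RR, very different RD.
```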

Group discussion

We agreed with the authors’ conclusion that: “… substantial variation in the noninferiority margin existed between the trials, suggesting that the different clinical judgments and perceptions of the investigators played a role.” We support the view that a systematic review of noninferiority trials should independently estimate the noninferiority margin(s) and produce a graphical representation of the trials’ outcomes showing both the trials’ published NI margins and the independent estimates. In a synoptic study, the forest plot should allow the reader to select a relevant M2, whether it corresponds to 50%, 67% or a different proportion of M1; the resulting threshold should be directly readable from the presented graph.
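If the percentages are read as the fraction of M1 to be preserved (our assumption here, consistent with the 50% convention above; the text could also be read as naming M2 directly), the thresholds a reader would check each trial’s confidence interval against can be tabulated as follows, reusing the hypothetical numbers from the earlier sketches.

```python
# Thresholds for different preservation fractions, reusing the hypothetical M1
# and the hypothetical lower CI bounds of (C - T) from the sketches above.
m1 = 0.14
trial_ci_lower = {"Trial A": -0.035, "Trial B": -0.020, "Trial C": -0.085}

for preserved in (0.50, 0.67):                 # fraction of M1 to be preserved
    threshold = -(1 - preserved) * m1          # (C - T) lower bound must exceed this
    verdicts = {name: lo > threshold for name, lo in trial_ci_lower.items()}
    print(f"preserve {preserved:.0%} of M1 (threshold {threshold:+.3f}): {verdicts}")
```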

The flood of noninferiority trials raises the possibility that we will face a dearth of placebo-controlled efficacy trials. It is possible to include a placebo arm in almost all comparative studies, much as in the protocols used to evaluate cancer treatments. Participants who cannot tolerate the control intervention can become part of a de facto placebo group. Comparing the control treatment plus the new treatment with the control treatment alone can show a potential independent increment in treatment efficacy. We were reminded that hitching new treatments to progressively less efficacious control treatments in sequential noninferiority trials leads, ultimately, to studies showing no effect over placebo; this slippage toward placebo is known as “biocreep”. We remarked that we may one day confirm noninferiority of a new treatment to a control treatment that was never directly shown to be better than placebo.
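To see why sequential noninferiority comparisons erode efficacy, consider this toy worst-case calculation with hypothetical numbers: each new generation of drug is allowed to lose up to half of its comparator’s effect over placebo.

```python
# Toy worst-case illustration of "biocreep": each generation retains only the
# minimum 50% of its comparator's effect that the NI margin guarantees.
effect_vs_placebo = 0.20        # hypothetical absolute effect of the original control
for generation in range(1, 6):
    effect_vs_placebo *= 0.5    # a new drug sitting just inside a 50%-preservation margin
    print(f"generation {generation}: guaranteed effect vs placebo >= {effect_vs_placebo:.3f}")
# After a few generations the guaranteed effect over placebo is negligible,
# even though every trial in the chain "demonstrated" noninferiority.
```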

References of interest:

D’Agostino RB Sr, Massaro JM, Sullivan LM. Non-inferiority trials: design concepts and issues — the encounters of academic consultants in statistics. Stat Med 2003;22:169-86.

U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER), Office of Biostatistics and Office of New Drugs. Guidance for Industry: Non-Inferiority Clinical Trials. Draft Guidance. March 2010. 66 p. (Robert Temple, lead author)
(http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM202140.pdf)