Bias in reporting of end points of efficacy and toxicity in randomized, clinical trials for women with breast cancer facilitated by Associate Professor Charles Leduc


Vera-Badillo FE, Shapiro R, Ocana A, Amir E, Tannock IF. Bias in reporting of end points of efficacy and toxicity in randomized, clinical trials for women with breast cancer. Annals of Oncology. 2013. (advance access 9 January 2013, doi:10.1093/annonc/mds636)

Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and Interpretation of Randomized Controlled Trials With Statistically Nonsignificant Results for Primary Outcomes. JAMA. 2010;303(20):2058-2064. doi:10.1001/jama.2010.651.

Summary of the study

Many clinicians and reviewers form their opinion of an intervention after reading only the abstract.  Can this lead to a misunderstanding of the outcomes of an RCT?  This paper is a pragmatic systematic review that aims to quantify “bias” and “spin” in the reporting of results in the abstracts of RCTs of breast cancer therapy.

They performed a pragmatic literature search: Medline only; English only; ending in August 2011; no small studies (< 200 participants); and focussed on trials that could “change clinical practice”.  The scales used were not validated, but the process is clearly presented.  Of note is the short interval between the end of the review period (August 2011) and the online publication date (January 2013), about 17 months.

The paper essentially shows that non-significant results (no benefit for the experimental arm) for primary endpoints are under-reported in the abstract and replaced by significant results observed in secondary endpoints.  Unfavourable results tend to be suppressed from the abstract results and/or conclusion sections.  Interestingly, when outcomes are significant and favourable, the importance of severe toxicity is under-reported.  Using a different approach, Boutron et al showed that under-reporting and spin also occur in the main paper.  The main finding can be summarised thus: spin and bias were used to suggest efficacy in 59% of the trials that showed no significant difference in their primary endpoints.

Also of note: neither industry funding nor journal impact factor was associated with biased reporting of toxicity.

The group also found the “Hierarchy scale for reporting of adverse events” to be very interesting and practical.


In trying to understand why reporting bias and spin were so frequent, we noted that potentially unrealistic primary outcomes for trials may compel authors to spin non-significant results.  We also noted that trial registration does not appear to bind authors to reporting the registered primary endpoints.

We also acknowledge that current instructions to authors writing abstracts indicate that they must “highlight what is interesting”, which is often interpreted as reporting positive and significant results rather than relevant non-significant primary endpoint results.  Guidelines for reporting trial results do not include criteria on consistency between the stated primary endpoints, the results reported in the main paper, and the results reported in the abstract.  It might be worthwhile to add such a step to avoid the biases documented in the referenced papers.

The Boutron et al paper reported on the “severity of spin”.  Given the prevalence of spin confirmed in the present paper, we may want to find out whether the presence of spin is associated with more citations and, if so, whether there is a dose-effect relationship between spin severity and citation frequency.

Again, Caveat lector!