The systems that regulate how drugs come to market are failing the very people they are supposed to protect. Behaviour that has been challenged for decades without resolution has now been publicly exposed through the work of people like Ben Goldacre and the AllTrials campaign. One important facet discussed by Goldacre in his book, Bad Pharma, is the reality of reporting bias and results spin permeating clinical trials. Reporting bias, such as selectively revealing or suppressing drug-patient interactions, can make a drug look more effective than it really is or disguise potentially harmful side-effects. Essentially, reporting bias is the under-reporting of undesirable results from a clinical trial so that the benefits are maximised and the desired hypothesis is 'proven'. There is a fundamental problem with this: it violates the standards set by the scientific method and thus renders any information obtained from such trials tainted. A hypothesis should not be desirable; it should be a proposition to be supported or refuted by the evidence. Deciding in advance which outcome is desirable automatically makes the experiment biased.
There’s so much to write about when it comes to skewed clinical trials and evidence reporting, and I encourage you to read further. However, what I really want to discuss is a new commentary released by NICE (the National Institute for Health and Care Excellence), the guidance body that makes recommendations to the NHS on which drugs it should buy. It’s important to note that NICE can only make recommendations on a particular drug based on the evidence available to it. Sometimes this can be very little, and what is available is often biased. The commentary was released through NICE’s ‘Eyes on Evidence’ bulletin, which aims to provide access to significant new evidence as it emerges. One section of the November bulletin caught my eye, titled ‘Bias in reporting of randomised controlled clinical trials in breast cancer’. If you would like to know more about randomised controlled trials (RCTs), please click here.
I think the bulletin does a good job of reporting the evidence from the source publication (£) but omits some of the important concluding remarks made by the authors. The study in question is a literature review, meaning that the authors went to the effort of finding the results of every breast cancer RCT published between 1995 and 2011 and analysing them for reporting bias. It sounds like a large job, and it is. One way to tackle the review is to take all the RCT publications and compare the findings against what each trial aimed to achieve when it was publicly registered. When a trial is being designed, the researchers select a primary end-point – the final measure that determines the success of the trial. For a cancer drug this may be something like ‘overall survival’ (OS), ‘disease-free survival’ (DFS) or ‘progression-free survival’ (PFS). What’s important here is that the primary end-point is registered, so that when the study data are released it will be clear whether or not the conclusions were reached according to the registered end-point. All clinical trials are supposed to be publicly registered before they start, in databases such as clinicaltrials.gov, but this practice is not enforced and many trials remain unregistered. This is clear in the current review, where it was noted that of the 164 trials selected only 30 (18%) were registered at clinicaltrials.gov prior to their start. However, we should be very careful when interpreting this finding, because clinicaltrials.gov only started in 2000 and at its inception only included trials registered in the US, with EU trials being asked to register later on. This means that any breast cancer RCT from 1995-2000 is unlikely to be in the database.
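The review’s core check can be sketched in a few lines: line up each trial’s registered primary end-point against the one reported in its publication, and count the mismatches. The trial names and end-points below are invented purely for illustration, not taken from the review.

```python
# Sketch of the review's end-point check. Trial names and end-points
# here are hypothetical, for illustration only.
registered = {"TRIAL-A": "OS", "TRIAL-B": "DFS", "TRIAL-C": "PFS"}
published = {"TRIAL-A": "OS", "TRIAL-B": "PFS", "TRIAL-C": "pathological response"}

# A trial "switches" its end-point if the published primary end-point
# differs from the one registered before the trial started.
switched = [trial for trial in registered if registered[trial] != published[trial]]
print(f"{len(switched)} of {len(registered)} trials switched primary end-point: {switched}")
```

In the actual review this comparison could only be run on the 30 trials that had been registered at all, which is what makes the low registration rate such a barrier to independent scrutiny.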
What I find more shocking is that of these 30 trials only 7 reported the same primary end-point as was registered. This represents a serious failure of transparency and makes post-trial evaluation by independent reviewers nearly impossible. How does one review the effectiveness of a drug if the primary aim of the treatment is unclear? In my opinion, changing the primary end-point looks like an admission of spin and bias in the trial.
Reporting bias as a result of unclear end-points was evident amongst the 164 trials reviewed. A total of 54 trials reported positive results based on non-primary end-points despite not finding a statistically significant difference in the primary end-point. Essentially, if the aim of a new drug was to reduce deaths from breast cancer and the trial failed to show that, a positive conclusion could still be reached based on surrogate outcomes. Interestingly, if you look only at the trials reporting no statistical significance for the primary end-point, the percentage reporting positive results on surrogate outcomes increases dramatically. This is a very sneaky way to spin the data to fit a preferred conclusion. Currently, there is nothing legally wrong with this, but it is irresponsible and misleading, and it means that the real-life risks and benefits of the drug are disguised from doctors and patients.
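To see why restricting to the non-significant trials inflates the percentage, here is a quick sketch of the arithmetic. The figure of 54 spun trials out of 164 comes from the review; the size of the non-significant subgroup (92) is a made-up number used only to show how the conditioning works.

```python
# 54 of 164 is from the review; the subgroup size of 92 is an assumed
# number, NOT from the review, chosen only to illustrate the conditioning.
total_trials = 164
spun_positive = 54      # positive claims resting on non-primary end-points
nonsig_primary = 92     # hypothetical count of trials with a non-significant primary end-point

overall_rate = spun_positive / total_trials        # share of all trials
conditional_rate = spun_positive / nonsig_primary  # share within the subgroup

print(f"spin rate across all trials: {overall_rate:.0%}")
print(f"spin rate among non-significant primaries: {conditional_rate:.0%}")
```

The same numerator over a smaller denominator is why the spin looks far more prevalent once you focus on the trials that actually failed their primary end-point.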
Another issue raised by the authors was the tendency to use DFS or PFS rather than OS as a primary end-point. OS is a much better way to assess the effectiveness of a life-saving drug because it counts deaths from any cause, not just the condition being treated. This means it captures deaths caused by drug side-effects as well as deaths resulting from disease relapse. DFS and PFS, on the other hand, only measure the time before the disease relapses or gets worse. The review discussed here found that only 27 of the 164 trials (16.5%) reported OS as the primary end-point. For women with breast cancer, neither DFS nor PFS has been shown to be an adequate surrogate for OS, yet over 80% of the trials reviewed used them as end-points.
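A toy simulation (entirely invented numbers, not trial data) shows how this gap can matter: a hypothetical drug that delays progression but occasionally causes a fatal side-effect can improve average PFS while leaving average OS worse than the control arm.

```python
import random

# Toy model, not real trial data: the hypothetical drug delays progression
# (mean 18 vs 12 months) but adds a small risk of fatal toxicity.
random.seed(42)

def simulate_patient(on_drug):
    months_to_progression = random.expovariate(1 / (18 if on_drug else 12))
    months_to_disease_death = months_to_progression + random.expovariate(1 / 24)
    # Fatal toxicity only occurs on the drug arm (mean 120 months if it occurs at all).
    months_to_toxic_death = random.expovariate(1 / 120) if on_drug else float("inf")
    os_months = min(months_to_disease_death, months_to_toxic_death)   # death from ANY cause
    pfs_months = min(months_to_progression, os_months)                # progression or death
    return pfs_months, os_months

def arm_means(on_drug, n=20_000):
    outcomes = [simulate_patient(on_drug) for _ in range(n)]
    mean_pfs = sum(p for p, _ in outcomes) / n
    mean_os = sum(o for _, o in outcomes) / n
    return mean_pfs, mean_os

control_pfs, control_os = arm_means(on_drug=False)
drug_pfs, drug_os = arm_means(on_drug=True)

print(f"control: mean PFS {control_pfs:5.1f} months, mean OS {control_os:5.1f} months")
print(f"drug:    mean PFS {drug_pfs:5.1f} months, mean OS {drug_os:5.1f} months")
```

Under these assumed parameters the drug arm wins on PFS and loses on OS, which is exactly why a PFS-only headline can hide harm that an OS end-point would have exposed.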
It was also noted that doctors will often only read the abstract or concluding remarks of a new clinical trial publication due to time constraints. It is therefore essential that primary end-points are clearly described in these sections so that doctors can review the evidence accurately. Yet primary end-points were rarely reported in the abstract or conclusions of the trials that claimed positive results despite a non-significant primary end-point.
It would be nice to believe that this reporting bias is abnormal, but the statistics presented in the article are consistent with reviews conducted elsewhere and on other medical conditions. This review simply adds to the evidence that this fraudulent and misleading behaviour is rife in clinical trial reporting. If there were ever a need for all trials to be published transparently, made fully accessible and regulated with enforcement, it is now.
Please join the AllTrials campaign and help put healthcare back in the hands of doctors and patients.