The 3Rs for better science

The use of animals for medical research is a sensitive and complex topic.  While the majority of people in the UK understand and support the need for animal experiments to advance medical research, many may not be aware of the united front within the research community to ensure that these experiments are carried out to the highest standards.  The National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) is leading this movement to help researchers and research funders improve animal experimentation, and ultimately produce better science.

The 3Rs stand for Replacement, Refinement and Reduction, and identify the key ways research can be improved when it involves the use of animals.  Replacement encourages research which seeks to develop or utilise non-animal methods, refinement identifies ways in which experimental methods can be modified to improve animal welfare, and reduction encourages better study design to reduce the number of animals used in research.  The 3Rs have been adopted by funding bodies such as medical research charities, the Medical Research Council and the Wellcome Trust, and adherence to the principles is required before funding is released.

Making science better

Adhering to and promoting the 3Rs is not just about protecting the animals; it’s about making science better, more efficient and more reproducible, so that the animals used in experiments are not wasted.  We should be asking researchers to disclose full details of their animal work not only so we can scrutinise how well they’ve treated their animals, but also so we can see that the work is justified.

There are several things a researcher can do to adhere to the 3Rs.  Careful, well-thought-out experimental design can reduce the number of animals needed.  Asking a statistician for advice when designing protocols can identify where animal numbers can be reduced, through power calculations and accurate prediction of the numbers required.  Understanding what other statistical options are available could also reveal more effective ways to conduct experiments.  Plans also need to be in place to minimise experimental bias, to enhance the quality of research and prevent over-use of animals in the pursuit of robust results.  This could be as simple as blinding investigators during the analysis of animal tissue, ultimately producing better data sets with fewer animals.
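To make the power-calculation point concrete, here is a minimal sketch in Python using the statsmodels library.  The effect size, significance level and power below are placeholder values chosen purely for illustration, not recommendations for any particular experiment.

```python
# Minimal illustration of an a priori power calculation for a two-group study.
# All design values are hypothetical placeholders.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.8   # expected standardised difference between groups (Cohen's d)
alpha = 0.05        # two-sided significance level
power = 0.8         # 80% chance of detecting the effect if it is real

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"Animals needed per group: {n_per_group:.1f}")  # roughly 26 per group
```

Running a calculation like this before the experiment, rather than after, is what lets the design, not habit, dictate how many animals are used.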

Reporting for reproducibility

In 2010, the NC3Rs published a paper in PLoS Biology highlighting the need for better reporting of animal research in published studies.  The statistics they report are alarming: only 59% of 271 randomly selected articles stated a research objective and the number and species of animals used, 87% did not report randomising their studies, and 86% did not report any blinding to reduce bias.  Only 70% properly reported their statistics and presented the results with a measure of precision or variability.

The solution was to develop the ARRIVE guidelines for the reporting of in vivo experiments.  The ARRIVE guidelines are essentially a checklist for researchers to follow when reporting animal research.  Animal experiments can be written up in full detail and submitted to journals as supplementary material if the editors set content limits.  It’s imperative that research funders demand this full disclosure and transparency from their grant holders, so that the maximum output is achieved from animal experiments, the work is reproducible and excessive animal use is avoided.


As discussed above, transparent reporting of animal research is crucial for the accuracy and reproducibility of experiments.  This transparency ensures that any deviation from what the researcher expects of their experiment can be picked up and resolved before unnecessary experiments are carried out.  But there also needs to be transparency in how the animals were developed for experimentation.  For example, if a researcher is developing their own mouse models, or is maintaining mouse models for an extended period of time, they need to report phenotype and genotype data to pick up any deviation in the animals’ genetic background.  Genetic drift occurs rapidly in mouse breeding programmes, with over 100 SNPs appearing in the mouse genome in one generation.  Without full reporting, the mouse phenotype may not be reproducible due to the emergence of sub-strains over time.


The 3Rs for a better future

Animal testing is currently necessary for new treatments to be developed for diseases such as cancer and dementia.  Legal regulations require that new treatments are shown to be safe and effective in at least two different animal species before they can be approved for clinical trials in humans.  It is not practical to ban animal testing for medical research, as we don’t currently have the methods or alternatives that would allow these strict but necessary regulations on drug testing to be removed.  The work by the NC3Rs is ensuring that where these experiments have to take place, they are done in a way that maximises the use of each animal, produces data with the biggest impact and maintains the highest level of animal welfare.  On top of this, they are promoting and funding work into the replacement of animal models with non-animal alternatives, so that we can move towards a future without the need for animal experimentation.  It’s important to remember that it’s not solely about a reduction in the number of animals being used but a refinement of the techniques, husbandry and reporting, which will ultimately lead to better science.

Why I won’t be eating two more portions a day

The health service, nutritionists, the government and your parents are always telling you to eat more fruit and veg.  It’s sound advice, it really is, but how can we actually quantify the amount of fruit and veg a person should eat?  The answer is we can’t.  At least not accurately enough to warrant telling people exactly how many ‘portions’ they should be eating in a day.  The ‘5-a-day’ message is a reasonable one though.  Even if it does seem to suggest there is a set amount of fruit and veg the average person should eat in a day, the real message is plain: eat more fruit and veg because it’s good for you.

So why then do I turn on the TV this morning to find the news barking about how a new study has shown we need to eat 7 portions of fruit and veg a day to cut the risk of dying of common diseases?

The news comes from a study carried out by researchers at University College London, who analysed questionnaire data collected by the NHS on people’s diet and lifestyle.  Essentially, the study indicates that the more fruit and vegetables people ate, the less likely they were to die at any given age.  In other words, fruit and veg is good for you – not a particularly new message.  So why, oh why, did they have to go and quantify how much you should be eating to avoid death?  It’s just not possible – not only because it’s difficult to interpret real-world quantities from a self-completed questionnaire, but because the study design itself has limitations which make it impossible.  This is not a criticism of the research, but rather a facet of this type of study which should be respected.


The first thing to note about the study is that it collected data from participants between 2001 and 2008.  Although this is better than a single measurement, it still only represents a relatively short period of a person’s life.  That isn’t particularly useful when a healthy lifestyle is something you have to maintain, rather than something you only need at the exact moment you were asked.  This isn’t to say that this kind of study isn’t useful, but it’s definitely a limitation worth considering.

Confounding factors

The other big factor which needs to be taken into account is the range of other lifestyle and environmental factors which affect general mortality.  The participants in the study are likely to be exposed to a wide range of different factors such as smoking, drinking, exercise, where they live, what their job is and so on.  You can try to take these into account when analysing the data, but you’ll never be able to remove them from the equation entirely – particularly when you are dealing with a large-scale cross-sectional study.  There’s also the issue that healthy eaters tend to live healthier lifestyles.  So how do we know that the benefits of 7-a-day aren’t because those people go to the gym more often or don’t smoke?
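As a purely illustrative sketch of what ‘taking these into account’ means in practice, here is a simulated example in Python using numpy, pandas and statsmodels.  The data, variable names and effect sizes are all invented, and this is not the model used in the UCL study; the point is simply that adjusting for a measured confounder such as smoking changes the apparent effect of fruit and veg, while anything unmeasured stays baked into the estimate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Entirely simulated cohort: smokers tend to eat less fruit and veg, and
# smoking strongly raises the risk of death (a classic confounding set-up).
smoker = rng.binomial(1, 0.3, n)
portions = np.clip(rng.poisson(5 - 2 * smoker), 0, 10)
true_risk = 1 / (1 + np.exp(-(-3 + 1.2 * smoker - 0.05 * portions)))
died = rng.binomial(1, true_risk)
df = pd.DataFrame({"died": died, "portions": portions, "smoker": smoker})

# Unadjusted model: the apparent benefit of each extra portion is inflated,
# because people who eat more fruit and veg are also mostly non-smokers.
print(smf.logit("died ~ portions", data=df).fit(disp=0).params)

# Adjusted model: adding smoking as a covariate pulls the 'portions'
# coefficient back towards the small value the data were simulated with.
# An unmeasured confounder could not be corrected this way.
print(smf.logit("died ~ portions + smoker", data=df).fit(disp=0).params)
```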

Risk reduction paradigm

It’s also worth mentioning that lifestyle interventions that reduce the risk of anything need to be assessed alongside the impact of other lifestyle factors.  This is important so we can see the ‘weighting’ of each factor on risk, rather than looking at each in isolation, and so work out how much impact a certain intervention has for an individual.  For example, if you eat 7-a-day but you are also a heavy smoker, does eating 7-a-day even have a meaningful impact on your risk of cancer or heart disease?  The effect of eating 7-a-day over, say, 5-a-day could be so small that the increased risk from smoking makes it irrelevant.  In fact, if you are a heavy smoker there might be no measurable benefit from eating extra fruit and veg at all.
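To put toy numbers on that weighting argument, here is a back-of-the-envelope calculation.  Every figure in it is hypothetical, and multiplying relative risks like this assumes the two factors act independently, which is itself a simplification.

```python
# All numbers below are invented purely for illustration.
baseline_risk = 0.10      # hypothetical baseline lifetime risk of a disease
rr_smoking = 2.5          # hypothetical relative risk from heavy smoking
rr_7_vs_5 = 0.95          # hypothetical small benefit of 7-a-day over 5-a-day

# Naive combination, assuming the factors act independently.
smoker_on_5_a_day = baseline_risk * rr_smoking
smoker_on_7_a_day = baseline_risk * rr_smoking * rr_7_vs_5

print(f"Heavy smoker on 5-a-day: {smoker_on_5_a_day:.3f}")  # 0.250
print(f"Heavy smoker on 7-a-day: {smoker_on_7_a_day:.3f}")  # about 0.24, barely moves
```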


The BBC have been cautious about touting a 7-a-day message, which is great to see, but I haven’t yet seen the full media coverage and I expect some of it will completely overstate the results of this study.  I don’t think the study authors should have allowed a 7-a-day message to accompany their research, because I think they are a long way off showing that eating 7-a-day has any benefit over 5-a-day.  Either way, the important message is that a healthy diet and a healthy lifestyle are going to be your best bet at having some influence over how and when you die, but I wouldn’t get too hung up on it.

Full study article (Open Access, yay!):

Mercola on Thermography: “it’s cheaper, safer and more accurate”

A couple of weeks ago I was asked to provide some information regarding a new and improved breast cancer screening test which was being touted somewhere in the South of England.  It transpires that the new test was in fact our old friend Thermography, an apparently cheaper, safer and more accurate way to screen for breast cancer.

When someone claims that something is cheaper, safer and more accurate than the current technology, then they had better have some good evidence to back it up.  This company definitely does not have that, but they do have a link to an outrageous document from the Mercola group explaining how and why Thermography is superior to other screening methods.

The document is 16 pages of, to put it politely, fucking bullshit.  But there are some real comedic gems amongst the drivel which I thought were worth sharing.  Oh and if you fancy reading about Dr Mercola, founder of Mercola group, there’s some interesting info here, here and here.

First page of the Mercola evidence book hits you with some straight hard facts:


Yes, well I’m sure there’s a reason why your Doctor isn’t telling you about Thermography, and it’s not because it’s cheaper, safer or more accurate.

Only one paragraph in and they think it’s suitable to just toss this bit in:


I thought this was about cancer detection, but you know, nothing like some pointless statements to help with the scam.

So onto the evidence:


Impressive.  Shame no reference has been supplied so I can’t find out if this is true or not.

Then some even more impressive stats and figures – again no references.  I would particularly like to see the evidence that thermography is the ‘single most important indicator of high risk of breast cancer’.


Wait – there’s more:

Another bold claim with no reference.  And then, in classic SCAM fashion, here’s a load of bollocks about how long thermal imaging has been around for and how it has been ignored by the medical community:

Hippocrates thought it so it must be true.


Hippocrates also came up with the four-humours theory to explain how the human body worked, and he was wrong about that.

So what? What has this got to do with the evidence for Thermography??

Mercola supply a nice explanation of how much screening costs the US government:


But then they seem to shoot themselves in the foot by saying:

So at a cost of $150 per breast scan, we would expect the cost to the US government to be $9.75 billion per year.


Based on the evidence supplied by Mercola, Thermography is:

Cheaper? NO

Safer? NO

More accurate? NO


And we couldn’t finish without a good conspiracy now could we?

A strange incident of open access science

I am currently taking part in the Reproducibility Project: Cancer Biology, a study run by the Open Science Framework which aims to investigate the replicability of the top 50 cancer biology studies published between 2010 and 2012.  Reproducibility is a key component of scientific progress, and being able to replicate the findings of other people’s published work should be straightforward and consistent.

This post is not about the project itself, but about a strange and disturbing anomaly I picked up regarding access to the published paper I was working on.

The paper (‘The microRNA miR-34a inhibits prostate cancer stem cells and metastasis by directly repressing CD44’) was published in Nature Medicine in 2011 and is freely available, in its entirety, in PubMed Central.  What’s odd, though, is that if you look at the PubMed entry for this paper there are two ‘link out’ buttons at the top right which are supposed to take you to the full paper – but only one of these is open access.

If you click the ‘Free in PMC’ link, you can access the full paper and all the supplementary material.  If you click the ‘Nature Medicine’ link, you are taken to a paywall asking you to purchase the article for $32.

Worryingly, if you Google the paper title, the top hit is a link to the Nature Medicine paywall and not the free version in PMC.

This seems odd to me.  Is it devious or am I missing something important?  Has anyone else noticed this before?

Gene that makes cells ‘sticky’ could be the key to stopping spread of breast cancer

The clinical statistics for breast cancer paint a remarkably accurate picture of where research has got to and where it is going.  You only need to look at two timelines to see the real-life impact of the disease and the breakthroughs that have shaped it.

The number of new breast cancer cases diagnosed each year has been steadily increasing since the mid-1970s – a stark reflection of an ageing population (cancer is largely a disease of ageing, after all) and a product of routine screening.  There are currently around 50,000 new breast cancer cases diagnosed each year, and this is predicted to rise to 57,000 by 2025.  Conversely, the number of people dying from breast cancer has been steadily decreasing since the 1990s, a direct result of early disease detection and the development of effective treatments.  Deaths from breast cancer have dropped by nearly 39% since the mid-1980s, but can we expect this decline to continue?

An estimated 12,000 women die of breast cancer each year in the UK, and most if not all of these deaths can be attributed to the secondary form of the disease.  This is also known as metastatic breast cancer: tumour cells spread to other tissues of the body, set up shop and seed a secondary tumour.  A tumour in the breast tissue is not likely to be lethal, but a tumour in the brain, lungs or liver tells a different story entirely.  In fact, breast cancers confined to the breast have cure rates that exceed 90%, whereas metastasis to the brain can reduce survival rates to below 20%.  With few treatment options and no drugs specific to metastatic breast cancer, unless all breast cancers can be prevented from progressing, death rates will not continue to decline.

It is for this reason that cancer charities and researchers are turning their attention to understanding the process of metastasis, so that new treatments and preventative strategies can be developed.  Metastasis is a complex process involving genetic and molecular changes to tumour cells at different stages of progression.  These changes help the cells to migrate away from the primary tumour, recruit blood vessels to aid their spread, invade biological tissue at a secondary site and survive there long enough to develop a new tumour.  Because there may be tumour cells at various stages of metastasis at any one time, these changes also make the disease extremely difficult to treat.

Researchers from the Breakthrough Breast Cancer Research Centre, housed at the Institute of Cancer Research in London, are one team pitching in to find out how and why secondary breast cancer occurs and what can be done to stop it.  In a paper recently published in Cancer Discovery, Professor Clare Isacke and her team describe a gene that influences the formation of secondary breast cancer, and reveal a potential new treatment that could block the process.

The team first silenced over 1000 genes in mammary tumour cells before implanting them into mice.  They then waited until secondary tumours formed in the lungs of the mice before collecting tumour samples and analysing the cancer genomes.  The theory was that tumour cells with certain genes switched off would set up home in the lungs, and by assessing the genetics of these tumours, the researchers hoped to identify specific genes that facilitate breast cancer metastasis.

One of the genes they identified was of particular interest because of a known role it plays in enabling metastatic cancer cells to stick to secondary tissues and seed new tumours.  The gene in question, called ‘ST6GalNAc2’, codes for a protein that alters the characteristic features of the cell surface.  The outer surface of a cell is embedded with molecules that dot the landscape like mountains and forests on the Earth’s surface.  The ST6GalNAc2 protein chops off a specific molecule from the surface of the cell and, as a result, the cell is no longer able to pick up a second molecule called ‘galectin-3’, which is present in the fluid surrounding our cells.  With the ST6GalNAc2 gene switched off, the cancer cells pick up lots of galectin-3 and become more likely to stick to areas like the lungs.

The researchers suggest that the ST6GalNAc2 gene could be used as a marker to select patients with a particular type of breast cancer who would benefit from treatment with a drug that prevents galectin-3 from making the tumour cells ‘sticky’.  Such drugs have already been shown to be safe in clinical trials and could provide a new treatment option for the prevention of secondary disease.

Making personalised medicine for cancer a reality

Judith Potts appeared in the Telegraph today discussing the future of cancer treatments.  Her article, on defining cancer by its molecular attributes rather than by the region of the body it arises in, highlights where research should be going.

It’s becoming clearer to scientists that lumping cancers together as ‘breast’, ‘ovarian’ or ‘lung’ may not be useful when it comes to treating a patient.  Within each type of cancer there are many sub-types, each categorised according to its molecular, genetic and physical characteristics.  But there is overlap between the sub-types of one cancer, which blurs the boundaries that define them.  Cancers can also change as the disease progresses, masking their categorised features and developing new ones.

Cancers are also individual.  Tumours derive from the patient’s own cells and so each cancer is individually characterised by the genetic and environmental factors that have influenced that person’s life.   We know that certain genetic mutations are more likely to occur in certain cancers but the individuality of cancer means we can’t expect a blanket treatment for all patients with one type of cancer.

Judith proposes that molecular profiling of individual cancer patients is the way forward, and I am inclined to agree.  This method looks at a wide range of molecular markers, each of which represents a particular weakness in the cancer.  A clinician could then use this information to match each weakness with a drug to exploit it.  This would be done on data gathered from an individual patient’s tumour, providing a clear road towards fully personalised medicine.  Clinicians could also get around the problem of evolving cancers by taking new molecular profiles from the patient at different points in their disease and adapting treatment accordingly.
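In software terms, the matching step described here is essentially a lookup from a patient’s marker profile into a knowledge base of marker-drug pairings.  The sketch below is a toy illustration only; the marker names, drug names and data structure are hypothetical stand-ins, not any real profiling product.

```python
# Toy sketch of matching a molecular profile to candidate drugs.
# Marker and drug names are illustrative placeholders.
from typing import Dict, List

marker_to_drugs: Dict[str, List[str]] = {
    "HER2_amplification": ["trastuzumab"],
    "BRCA1_mutation": ["PARP inhibitor"],
    "high_PDL1_expression": ["checkpoint inhibitor"],
}

def match_treatments(profile: List[str]) -> Dict[str, List[str]]:
    """Return candidate drugs for each marker in the profile that the knowledge base recognises."""
    return {m: marker_to_drugs[m] for m in profile if m in marker_to_drugs}

# A hypothetical patient's tumour profile from one biopsy; re-profiling later in
# the disease would simply re-run the same lookup on the new set of markers.
patient_profile = ["HER2_amplification", "high_PDL1_expression", "TP53_mutation"]
print(match_treatments(patient_profile))
# {'HER2_amplification': ['trastuzumab'], 'high_PDL1_expression': ['checkpoint inhibitor']}
```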

This type of care for cancer patients is already available in some countries, if you can afford it, but there will be several problems to overcome when the technology becomes widespread.  Biotech companies that patent molecular profiling kits could increase costs.  Confidentiality around their product could also hide whether or not the molecular profiles are accurate.  This could lead to patients receiving an ineffective treatment or a treatment that causes harm.

There is also the issue that a drug approved for use in, say, lung cancer may not have been tested against breast cancer.  Molecular profiling might tell you that a patient’s cancer has a weakness that drug could exploit, but you would struggle to give it to them.  And what about new experimental drugs?  With no evidence from clinical trials, it wouldn’t be possible to use one even if you knew that the patient’s cancer would be sensitive to the treatment.  One way around this would be to reassess how clinical trials are set up and allow for greater flexibility in trial design (a discussion for another time).

Molecular profiling is on the horizon and offers obvious benefits to the way we treat cancer.  However, for it to work, policy and the healthcare system need to evolve with the science.

Are carbs really the key to preventing brain disease?

The Times’ recent promotion of a new book by the neurologist David Perlmutter raises some interesting questions regarding the evidence base behind the book’s claims.  David’s book is called ‘Grain Brain: The Surprising Truth About Wheat, Carbs and Sugar – Your Brain’s Silent Killer’ and asserts that gluten, consumed through wheat and grains, is responsible for triggering brain disorders such as depression, dementia, schizophrenia, epilepsy, ADHD and decreased libido.  Let’s take a look at The Times piece and see if there is any merit in the claims.

David is quoted as saying: “The origin of brain disease such as dementia is predominantly dietary, he says, and the result of us consuming too many carbohydrates (particularly wheat-based bread and pasta as well as sugar) and too few healthy fats”.

Straight off, I think the word dementia has been misused here, as (according to the Alzheimer’s Society) dementia is an umbrella term used to describe the symptoms that occur when the brain is affected by certain diseases or conditions.  So dementia is not a brain disease in itself but a set of symptoms of brain disease – the most common cause being Alzheimer’s.

Now, while there is some evidence for diet as a contributing factor to the lifetime risk of Alzheimer’s disease, it has never been concluded to be the predominant factor.  Alzheimer’s Research UK and the NHS both state that age, family history, genetics, smoking and other conditions including diabetes and obesity all increase the lifetime risk of Alzheimer’s.  The only mention of ‘carbohydrates’ comes from a recommendation that people with diabetes need to control their blood glucose.

David continues: “Researchers have known for some time that the cornerstone of brain disorders is inflammation, he says. Gluten — consumed through wheat and other grains — and a high carbohydrate diet are among the most prominent stimulators of inflammatory pathways that reach the brain”.

It would be hard to dispute that there is a link between inflammation and brain disorders such as Alzheimer’s, but to conclude from this that high-carb diets underlie brain disorders, on the grounds that they might elicit an inflammatory response, seems a little far-fetched.  In fact, most people would argue that the greatest risk factor for Alzheimer’s is age.  Whether gluten and a high-carbohydrate diet constitute a ‘prominent stimulator of inflammatory pathways that reach the brain’ is not something I know much about, but I have yet to read anything that convinces me.

Perlmutter argues that people should move onto a low-carb, high-fat diet in order to protect themselves from brain disease.  To suggest nutrition is that simple is irresponsible.  The nutritional demands of a healthy lifestyle are individual, and increasing intake of fats (he suggests cheese, meat, butter and eggs) could put some people at increased risk of other diseases.

It has been pointed out that very low carb diets can be a therapeutic tool for treating some neurological disorders.  However, it has been noted that ‘recommending a low-carb diet as an intervention for sick people is very different from promoting it as a preventative measure for the entire population, which is what Dr. Perlmutter does in Grain Brain’.

The truth is that we don’t know a great deal about the risk factors for brain disease, nor do we understand how they interact with each other or the level of risk each poses.  Others far more knowledgeable in this field than me have said that claims like these, ‘which also suggest an element of blame towards the person with the condition, are unhelpful and do not do justice to the complexity of these diseases’.

Thanks to @_josephinejones for the article info