Ask the nearest person in the lab about their views on the use of animals in science and you may well find that they look slightly uncomfortable and suddenly find themselves in need of a coffee or similar. It’s not exactly a comfortable topic; indeed, it is undoubtedly one of the most polarising and argument-provoking topics in science. What better area to write about in this rather long-overdue blog?
(P.S. unlike my previous posts, I will avoid any Donald Trump-related jokes. Somehow it is less amusing after it actually happened than before… However, I do suggest you give this a read: http://www.nature.com/news/meet-the-scientists-affected-by-trump-s-immigration-ban-1.21389)
Personally, it’s an area I’ve been exposed to in two different ways. Firstly, whilst my own research has not involved – and will not involve – animals, others in my lab have done a fair amount of animal-based work. Secondly, my walk to work used to take me past a group of animals-in-science protesters every day. It’s probably fair to suggest, therefore, that I’ve seen both sides of the coin. It is definitely incorrect to suggest that I have any sort of authority to give a definitive “right or wrong” answer – no-one does – but I would say my own views are at least moderately well informed, largely thanks to a more-than-passing interest motivated by the above. I’ll discuss this topic primarily in relation to its main application – drug development for disease treatment – but, as mentioned in passing later, this isn’t the full extent of animal-based research.
Furthermore, I will note at this early point that the obvious key argument against animal testing is its inhumane nature. This is something everyone will have their own views on, but even staunch supporters of animals in science would surely struggle to argue that it is completely humane and benign. Rather, the question is whether such an issue is, or isn’t, outweighed by clear scientific and societal benefits, and it is this area I hope to focus on here.
Proponents of the concept of animal testing will regularly cite the number of key medical breakthroughs that have relied on, or “would not have come about without”, animal-based research. The eradication of smallpox is a common example. Chimpanzees were central to the development of a Hepatitis B vaccine. The isolation and understanding of insulin, used in the treatment of diabetes, also relied on animal studies. More generally, animal testing is a mandatory part of preclinical trials for any prospective drug (at least in the UK, to the best of my knowledge). Whether such discoveries would actually not have happened without animal research is debatable (and sadly rewinding time to change history is not currently feasible outside the realms of Harry Potter), but it is undeniable that it played a key part.
Indeed, in the context of drug discovery and testing, animal testing currently acts as a key stepping stone on the pathway from initial in vitro tests to human clinical trials. More specifically, the somewhat simplified pathway might go along the lines of “in vitro -> in vivo (isolated cells / tissues / maybe organs) -> in vivo (animals) -> in vivo (humans)”. The logic is that the animals used in such testing are genetically similar enough to humans that results obtained from these studies should be relatively translatable to humans. Mice, commonly utilised as model systems, share over 90% of their DNA sequence with humans. This said, we share 60% or so of our DNA sequence with bananas, which puts that 90% figure in context somewhat. Would you consider yourself, truly, part-banana? Maybe you would, I don’t know. Would you run a clinical trial on a banana? I sincerely hope not.
DNA is an incredibly large, complex molecule. Even a 10% difference corresponds to an awful lot of genetic material, so do take such figures with a degree of caution. In particular, bear in mind that DNA isn’t the be-all and end-all with respect to organism similarity: mice, for example, are rather smaller than we are and exhibit rather different metabolic profiles, and, perhaps most of all, the phenotypic differences between organisms are not defined solely by DNA sequence (an identical DNA sequence may be regulated in very different ways with regards to the production of the downstream products – proteins, in particular – that are more directly responsible for such differences).
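To make the “even a 10% difference is a lot” point concrete, here is a minimal sketch. The `percent_identity` function and the sequences are purely illustrative (naive per-base comparison of pre-aligned sequences, which glosses over how real genome comparisons are done), and the ~3 billion base-pair figure is the commonly quoted approximate size of the human genome.

```python
# Toy illustration of why "X% DNA similarity" hides a lot of detail.
def percent_identity(a: str, b: str) -> float:
    """Fraction of matching positions in two equal-length aligned sequences (naive)."""
    assert len(a) == len(b), "sequences must be pre-aligned and equal length"
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(percent_identity("ACGT", "ACGA"))  # one mismatch in four bases

# Even a "small" 10% difference over an entire genome is enormous:
genome_size = 3_000_000_000  # approximate human genome size in base pairs
print(f"10% difference ≈ {0.10 * genome_size:,.0f} differing bases")
```

The headline percentages also depend heavily on what is being compared (whole genomes vs. protein-coding genes, and how gaps are counted), which is part of why mouse–human similarity figures vary between sources.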
With regards to drug development, animal testing is typically a mandatory component carried out prior to phase 1 clinical trials. Its function is, above all, to assess the safety of the drug before it goes into humans, and presumably also to demonstrate some level of efficacy in a full organism. If we want to assess how justified this is, we could do worse than look at how effective it is. Quoting directly from http://www.understandinganimalresearch.org.uk: “If you want to know how truly successful animal tests are, consider that in over 30 years there has not been a single death in a Phase 1 clinical trial in the UK”. However, one might consider the counter-argument: a huge proportion of drugs that make it past animal trials – around 92% – fail to be approved for use in humans (http://www.neavs.org/research/limitations). Perhaps animals aren’t such a great model after all. Yet one can counter this counter-argument with the observation that most drugs that fail do so in phase 2 or 3, i.e. after they have already been tested in humans. So it’s a little hard to say whether it is animals themselves that are a poor model for humans, or whether a small group of humans (phase 1) is a poor model for a larger group (phases 2-3). I’ll let you decide, but you can see it’s hardly clear-cut either way.
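It is worth seeing how that ~92% overall failure figure compounds from per-phase attrition. The per-phase success rates below are made-up round numbers for illustration only (real rates vary widely by therapeutic area); the point is simply that modest attrition at each stage multiplies into a very high overall failure rate.

```python
# Illustrative sketch (NOT real trial data): how per-phase attrition
# compounds into an overall failure rate near the ~92% cited above.
phase_success = {
    "Phase 1": 0.60,   # assumed: safety/tolerability hurdle
    "Phase 2": 0.30,   # assumed: efficacy in a small patient group
    "Phase 3": 0.55,   # assumed: efficacy at scale
    "Approval": 0.85,  # assumed: regulatory review
}

overall = 1.0
for phase, p in phase_success.items():
    overall *= p  # probability of surviving every stage so far
    print(f"Survives through {phase}: {overall:.1%}")

print(f"Overall failure rate: {1 - overall:.1%}")
```

With these assumed numbers the overall failure rate lands around 92%, and notably most of the attrition happens in phases 2 and 3, i.e. after the drug has already passed both animal testing and phase 1 in humans – which is exactly why the statistic cuts both ways.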
I mentioned above that a 10% difference in DNA between organisms is still a substantial difference. Not least, such figures may represent differences across entire genomes, not just the DNA segments encoding functional products (genes). Herein lies something of a paradox. Organisms such as chimpanzees share even more of their DNA with humans (98% or so) and thus, it could be argued, represent still more suitable and accurate model systems. Yet the very reason we don’t use humans for safety testing is, obviously, an ethical one. As animals get increasingly similar to humans, they become better models, precisely because they become more “human-like”; this in turn also means that it is ethically (still) more challenging to utilise them in testing. Of course, some will argue that all animals are equal and thus there is no difference from mouse, to chimpanzee, to human. Indeed, the whole use of animals in research in lieu of humans surely stems from a fundamental mentality that animals are inferior to, or matter less than, humans (their lives are less valuable). That is, frankly, a topic for an entire article in itself – but clearly, one’s views on this greatly inform one’s views on animal-based research; the two are inextricably linked.
One particular area of medical research where it is hard to argue that the use of animals is not advantageous is that relating to genetic disorders, i.e. diseases passed on through inheritance. Simply put, animals such as mice and rats have far shorter lifespans than humans do. Studying such processes relies on the ability to observe whether or not a disease propagates genetically, i.e. the ability to examine multiple generations of the species. For humans (and certain other longer-lived animals), this could take centuries! Try motivating a PhD student to start work on a century-long experiment. It might be tough.
What is becoming increasingly apparent, however, is the emergence of alternatives to animal studies. Sophisticated human-patient simulators, increasingly complex computational models, microchips allowing the construction of functional miniature organs made from real human cells and tissues – these are just some of the technologies holding the promise to perhaps eventually replace animals in research. Skin irritation, for example, can be – and has been – tested using MatTek’s “EpiDerm™”; vaccines can be – and have been – tested using human tissues. Is the use of animals in drug development increasingly archaic? Not yet, perhaps. It will take many years and huge quantities of (supportive) data for e.g. computational or simulator-based systems to become accepted as genuine, reliable alternatives to animals. But it would be foolish to ignore the fact that alternatives do exist; even more so to persist with animal testing in the face of significant evidence that these more humane alternatives are every bit as effective, if not more. Science may find itself in need of a leap of faith.
I find myself very much on the fence with regards to the entire issue. I cannot deny that scientific benefits have undoubtedly emerged from animal-based research. However, any notion that animals represent a highly accurate model system, or that there aren’t more humane alternatives out there, seems (increasingly) incorrect. Personally, I tolerate animal studies when viable alternatives don’t exist. If or when they do exist, especially in light of the present limitations of animal systems, that tolerance might decrease rather sharply.
But one thing I don’t think can be, or rather should be, disputed is that animal-based experiments should be well-thought-out, well-designed, and, ultimately, their absolute necessity must be justifiable. This might seem common sense to some, yet I have come across a number of cases where this has, sadly, not been the case. Tentatively promising in vitro test results surely do not legitimise going straight to animal testing without, at the very least, some in cellulo studies; yet the “it sort of works in a test tube, great, let’s stick it in a mouse/monkey” mentality definitely exists (and is a mentality I have encountered at rather closer quarters than I’d like). I’ve seen people describe injecting things into mice, based on limited in vitro results, that frankly I’d feel uncomfortable injecting into bacteria, and to what end I am still uncertain. This sort of approach isn’t exactly going to cool the fire of animal rights activists, and nor should it.
Animal experiments should be, if needed at all, the very final stage, carried out only once one is certain that the system works in vitro and in vivo (cells/tissues), and certain that carrying out the experiments in animals would bring real and substantial scientific benefit and insight that could not be obtained otherwise. In some scientific papers there seems, to me at least, to be no real reason to carry out animal experiments other than “it’d look kind of cool to do this in an animal”. Sometimes, animal research is – at present – arguably necessary to further scientific understanding. It has aided, and undoubtedly will continue to aid, scientific discovery immensely. But don’t be flippant about it; these are living creatures too.