Animal testing (also known as animal experimentation or animal research) is the use of non-human animals in research and development by academic institutions and commercial pharmaceutical companies. At this very moment, millions of animals are being kept in cages in solitude, waiting to be sacrificed in the name of science or industry. With the sharp rise in the use of animals in research, it has become an open question whether we have any intrinsic right to use innocent animal lives when animal models are not ideal stand-ins for the human body. Scientists use animals in biological and medical research more as a matter of tradition than because animal research has proved particularly successful or better than other modes of experimentation. In fact, animal models have never been systematically validated, and the claim that they are necessary for biomedical research is unsupported by the scientific literature.

There is growing awareness of the limitations of animal research and its inability to make reliable predictions about human health. Major medical advances cannot always be attributed to experiments on animals; scientists themselves acknowledge that animal models are flawed and imperfect approximations of the human body and human disease. Researchers from the Yale School of Medicine and several British universities published a paper in the British Medical Journal titled "Where Is the Evidence That Animal Research Benefits Humans?" (2004). After systematically examining animal studies, they concluded that there is little evidence to support the idea that animal experimentation has benefited humans. Many of the most important advances in health are instead attributable to human studies, including the discovery of the links between cholesterol and heart disease and between smoking and cancer.
Between 1900 and 2000, life expectancy in the United States rose from 47 to 77 years (Utah Department of Health, 2002). This increase is attributed mainly to improved nutrition, sanitation, education and other behavioral and environmental factors, rather than to anything learned from animal experiments, although animal experimenters have tried to take credit for it. Clinical research, a multi-million-dollar investment by commercial pharmaceutical companies in the development of human drugs, always includes preclinical trials in animal models before safety is checked in humans. The fact remains, however, that new drugs are ultimately tested on humans regardless of animal testing: no matter how many animal tests are undertaken, someone will always be the first human subject, which is why volunteers are recruited for phase one trials. Because animal tests are so unreliable, they make those human trials all the more risky. The Food and Drug Administration (FDA), in its Challenges and Opportunities Report (2004), noted that about 92 percent of drugs shown to be safe and effective in animal tests fail in human trials because they do not work or are dangerous. Of the small percentage approved for human use, half are later relabeled because of side effects that were not identified in animal trials. This happens because the animals used in experiments differ from humans genetically and are artificially induced to develop conditions they would never normally contract; keeping animals in an unnatural and distressing environment and then trying to apply the results to naturally occurring human diseases is dubious at best. Physiological reactions to drugs vary enormously from species to species. HIV, for example, is deadly to humans but not to most laboratory animals.
So studying HIV in other species may not produce results that are applicable to humans. Animal research is often simply bad science. Human-centered research is invariably more accurate, effective and safe...