In medical science there is rarely certainty. Testing always leaves room for unexpected reactions, flawed data collection, or contamination. Should that uncertainty be borne by human lives? Instead, it falls on what medical science has most to thank: animals. Animals have been irreplaceable in mapping genomes, transplanting organs, and ridding humans and animals of diseases and disorders. Many call the practice immoral, but it is necessary and, on balance, beneficial. Medical research has saved or improved the lives of millions of people and animals. Today's medicines and surgical techniques could not have been developed without research into how the body works and how it reacts to procedures and substances, knowledge produced by animal research programs in hospitals, universities, and research centers all over the world. These advances, made possible by animal testing, have been applied to human health for years.
The history of animal testing is a long and interesting one. Animal experimentation is believed to have begun with Greek physician-scientists, among them Aristotle and Erasistratus. Physicians in Rome such as Galen, known as "the father of vivisection," followed suit. Later, physicians of the Islamic Golden Age used animal testing to further the study of human anatomy; Ibn Zuhr practiced surgical techniques on animals before performing them on humans. The observations and dissections of modern medical science first took place in the 17th century, when the English physician William Harvey used animals to study the circulatory system. In the 18th century, Antoine Lavoisier first used guinea pigs in experiments on respiration. Later, Otto Loewi, Edgar Adrian, and David Hubel made advances in neurology and the study of vision. These basic scientific advancements paved the way for many more in medicine.

Yet these advances and the testing behind them are under major scrutiny. The debates over the rights of animals can be divided into four broad phases. The first started in the 1860s and lasted until World War I. During this period, animal research became an important method of investigation and a significant source of public controversy. For a variety of reasons, the public found the idea of intentionally inflicting harm on animals in order to learn more about health and medicine particularly disturbing. Worldwide, opposition to the use of animals in research peaked in the last two decades of the 19th century and then began to decline. After World War I, the animal research issue became marginalized and of relatively little consequence for politicians and policy makers. The short second phase of the animal research debate lasted from around 1920 to 1950. During this period, animal research continued to develop as a means of discovering new biological information and as a route to possible cures; the discovery of insulin is an example of the benefit of animal research.
Opposition to the practice was sporadic and of little public impact, despite the support of such powerful individuals as William Randolph Hearst, owner of many American media companies, who promoted an anti-vivisection agenda through his newspapers. The third phase of the animal research debate ran from around 1950 to around 1975. After World War II, the governments of most developed countries became major sponsors of scientific research, including biomedical research. For example, the budget of the National Institutes of Health (NIH) grew dramatically and has continued to grow at almost 10% a year in constant dollars, with a few minor periods of cuts, up to the present time. This growth led to a huge expansion in publicly funded research. In the private sector, the discovery of penicillin and streptomycin led to an explosion of pharmaceutical research and in the size of the prescription drug industry. These expansions in government funding for biomedical research and in private-sector investment in drug...