Ethics, Deception, and ‘Those Milgram Experiments’
C. D. HERRERA
Critics who allege that deception in psychology experiments is unjustified frequently cite Stanley Milgram's 'obedience experiments' as evidence. These critics say that arguments for justification tend to downplay the risks involved and overstate the benefits from such research. Milgram, they add, committed both sins. Critics are right to point out that research oversight is often susceptible to self-serving abuse. But stating a priori how beneficial a given experiment will be is a tall order for psychologists, or anyone else. At the same time, critics themselves have difficulty in showing what is wrong with deception, and how subjects in these experiments suffer. Hence, it becomes unclear what the psychologists, including Milgram, are prone to downplay. There is also room to wonder how the Milgram studies can illuminate the debate over deception. Although Milgram probably exaggerated the scientific significance of his own work, critics who exaggerate its moral and historical significance do little to clarify the status of deception.
Rethinking the Benefits of 'Justified' Deception

What are we to make of that unique practice associated with some psychology experiments, the intentional deception of the research subjects? Psychologists argue that they are not using malicious or garden-variety deception, but deception of the 'justified' kind. They are quick to assure critics that these subjects will endure minimal risks, if any, while participating. Indeed, some give the impression that there is too much fuss over deception:

many of the ethical sermons being preached to social scientists seem to assume that those participating in research projects would never encounter given discomforts if they did not participate in the research . . . deceptive information is presented at every turn, particularly in advertising and political speeches . . . If a salesman deliberately deceives a prospective customer, he makes no attempt, after the sale, to reveal this deception. If social scientists were not so honest, subjects would not be aware of the deception and, hence, not so upset about their treatment.

Psychologists, at least the few who resort to deception, claim further that they must conceal some details of a proposed study from prospective subjects. A fully informed subject will be a 'reactive' one, the thinking goes. It is hard enough to observe natural behaviour in a campus laboratory; why add to the challenge by letting subjects know what the psychologists are up to? Whole areas of human behaviour would supposedly be off limits to research if psychologists had to be completely open and honest when they seek volunteers.

© Society for Applied Philosophy, 2001, Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.
Critics remain unconvinced by this appeal to research needs, and by the claims about deception being innocuous. For some, the trouble starts with the way the psychologists give an accounting of their work. Although specific procedures vary across nationalities and institutions, as a general rule it falls to something like an Institutional Review Board to evaluate the psychologist's promise of benefit over risk in these experiments. Psychologists offer a risk-benefit projection to the Review Board, and if they can win the Board members over, they then try to convince prospective subjects with roughly the same projection. It is probably true that this arrangement forces psychologists to pull off a bit of a public-relations victory. If they cannot convince the Review Board to accept the picture of risks and benefits, the process comes to a halt, and researcher never meets subject. Once they pass review, the psychologists still have to get...