This essay discusses the apparently logical proposition that if risk can be identified and controlled, industrial disasters are preventable. It first clarifies the concepts of ‘risk’, ‘identification and control’, ‘disaster’ and ‘preventable’ before examining the nature of the industrial disaster through a systems approach; it will be shown that a disaster can be deconstructed to present a series of ‘hooks’ on which preventative action could be taken. However, the nature of the system and the organizational culture in which it operates often prevent those lessons from being applied. Furthermore, not only are there limits to such lessons, or isomorphic learning, but the nature of industrial technology is such that accidents are unavoidable. The essay concludes by arguing that the efforts of risk managers should be focused not so much on preventing disasters as on withstanding them.
There is ‘no clear and commonly agreed definition of what the term “risk” actually means’ (Hood and Jones, 1996: 2). The 1992 Royal Society Report outlines the debate over whether risk should be analyzed using quantitative or qualitative methods. Favoured by engineers and mathematicians, risk may be understood quantitatively as ‘a combination of the probability, or frequency, of occurrence of a defined hazard and the magnitude of the consequences of the occurrence’ (Warner, 1992: 4). However, this approach has been criticized as too narrow because it ‘imposes unduly restrictive assumptions about what is an essentially human and social phenomenon’ (Pidgeon et al., 1992: 89). Rather, one should understand the ‘risk’ archipelago as comprising technological, social and natural hazards, which overlap to create hybrid hazards and which must be understood through multiple sub-disciplines of study (Hood et al., 1992; Hood and Jones, 1996). To that end, risk is better understood as a socio-technical phenomenon, approached through broader systems, organizational and cultural modes of study. Indeed, engineers now promote the requirement of a ‘clear appreciation of both technical components and social aspects of risk in order to manage that risk successfully’ (Royal Academy of Engineering, 2003).
The terms ‘identified’ and ‘controlled’ should be understood as part of the risk management spectrum. Risk management means different things to different people depending on their background, but if our understanding of ‘risk’ is broadened, then so too should be our understanding of its management. Accordingly, this essay understands identification and control as ‘regulatory measures’ intended to ‘shape the development of [and] …cop[e] with risk’ (Hood and Jones, 1996).
The meaning of the term ‘disaster’ is also subject to much debate. The nature of disasters is discussed below; suffice it to say that when a system breaks down, it is the catastrophic effects of that breakdown which we refer to as a disaster (Dombrowsky, 1995). Industrial disasters are understood as those which are techno-centric but involve human interaction, and which occur in a plant or factory setting (Richardson, 1994).
Deconstructing the nature of socio-technical disasters through a systems approach would suggest that the root causes of disasters, and thus the associated risks, can be identified by examining the interaction between human and technical systems. Barry Turner’s disaster incubation model suggests that disasters develop over several stages: during the ‘incubation period’, risks are misunderstood and warning signs ignored; a precipitating event then triggers the onset of disaster. Turner argues that this systems model should prompt risk managers to look for latent risks or failures, taking into account both human and technical errors (Turner, cited in Module 1, Unit 2: 2.2). Other theorists provide further modelling: Bill Richardson profiles socio-technical disasters and discusses planning tools...