Developments in Business Simulation and Experiential Learning, Volume 32, 2005

DEVELOPING MANAGERIAL EFFECTIVENESS: ASSESSING AND COMPARING THE IMPACT OF DEVELOPMENT PROGRAMMES USING A MANAGEMENT SIMULATION OR A MANAGEMENT GAME

John Kenworthy
Managing Director, Corporate Edge Asia
johnk@ce-asia.com

Annie Wong
Director, Corporate Edge Asia
anniew@ce-asia.com

ABSTRACT

This research evaluates the effectiveness of using a management simulation, a management game, or case studies within a strategic management training programme. The literature offers anecdotal evidence that both simulations and games surpass the use of case studies, but there is much criticism of the lack of robust research models used to validate such claims. Using a quasi-experimental design with a reliable managerial competency assessment instrument, the authors assess the impact of the different programme groups, measuring change in workplace behaviour on a 180° basis and participant learning as demonstrated to participants' own senior managers.

INTRODUCTION
The use of computer-based simulations has received attention more recently, both for their increasingly sophisticated design and for their promotion of participant interest (Mitchell, 2004). However, one of the major problems with simulations is how to “evaluate the training effectiveness [of a simulation]” (Feinstein & Cannon, 2002, citing Hays & Singer, 1989). Although researchers have lauded the benefits of simulation for more than 40 years (Wolfe & Crookall, 1998), very few of these claims are supported by substantial research (Butler, Markulis, & Strang, 1988; Miles, Biggs, & Schubert, 1986). Many of the authors cited above attribute the lack of progress in simulation evaluation to poorly designed studies and to the difficulties inherent in creating an acceptable methodology of evaluation. This paper is from an on-going research study comparing...

BACKGROUND AND CONTEXT
A large amount of the business gaming literature has dealt with how the method has fared against traditional methods of delivering course material (Keys & Wolfe, 1990). For example, the studies by Kaufman (1976), McKenney (1962, 1963), Raia (1966), and Wolfe and Guth (1975) found superior results for game-based groups versus case groups, either in course grades, performance on concepts examinations, or goal-setting exercises. Although anecdotal evidence suggests that students prefer games over other, more traditional methods of instruction, reviews have reported mixed results. Despite the extensive literature, many of the claims and counterclaims for the teaching power of business games and simulations rest on anecdotal materials or on inadequate or poorly implemented research (Gredler, 1996). As reviewed by Keys and Wolfe (1990), these research defects have clouded the business gaming literature and hampered the creation of a cumulative stream of research.

Much of the inability to make supportable claims about the efficacy of simulations can be traced to poorly designed studies, the lack of a generally accepted research taxonomy, and the absence of well-defined constructs with which to assess learning outcomes (Feinstein & Cannon, 2001; Gosenpud, 1990). As highlighted by Salas and Cannon-Bowers (2001), the conclusion that simulation in and of itself leads to learning is somewhat misleading; unfortunately, most evaluations rely on trainee reaction data rather than on performance or learning data. There is also such a variety of stimuli (e.g., teacher attitudes, student values, the teacher-student relationship) in the complex environment of a game that it is difficult to determine the exact stimuli to which learners are responding (Keys, 1977). Gosen and Washbush (2004) pointed out that although it seems appropriate to undertake research assessing the value of simulations, the majority of early studies focused on performance in the simulation (including aptitude scores in the form of SATs, grades, and other