Are Performance Improvement Professionals Measurably Improving Performance? What PIJ and PIQ Have to Say About the Current Use of Evaluation and Measurement in the Field of Performance Improvement

Ingrid Guerra-López and Hillary N. Leigh
Measurement and evaluation are at the core of reliably improving performance. It is through these central mechanisms that performance improvement professionals are able to demonstrate the true worth of their efforts. However, the true value of the contributions they make is inconclusive. This article presents a content analysis of 10 years' worth of Performance Improvement and Performance Improvement Quarterly articles as an initial data point to be used for professional reflection and further exploration into the intentions and practices of performance improvement practitioners.

The ability to prove that performance improvement professionals have made a measurable contribution to their clients and the field remains uncertain (Kaufman & Clark, 1999). Clark and Estes (2000) noted that highly regarded research groups who surveyed performance improvement solutions found "a huge gap between what we think we accomplish and what scientific analyses say we accomplished" (p. 48). Here are some of the findings cited by Clark and Estes (2000) from the work of the National Academy of Sciences, the National Research Council, and other independent research groups:
- Scientific studies of training found that training interventions often leave participants worse off than before the training intervention (more confused, less able to remember important information, less able to use their work-related knowledge effectively).
- More than half of organizational change initiatives are quickly abandoned.
- Kirkpatrick's level one evaluation, the most commonly used evaluation method, often yields about as much inaccurate information as accurate information, including the perception that the object of evaluation has helped when in fact it has done quite the contrary.
- Studies show that employee empowerment strategies have minimal success in some organizations and negative consequences in others.
- The more rigorous the evaluation, the less likely one is to find evidence of success.
- Myriad studies have found no evidence that multimedia, Internet, and intranet training produce additional learning benefits beyond those already furnished by traditional media such as human trainers or manuals.
- Studies indicate that one-third of the feedback strategies employed in our field do not improve performance, and another third make performance worse.
- Experiments that check for transfer of performance solutions show that even when they work once, they almost never work in other organizational contexts. Because we do not evaluate solutions that may have worked for someone else in another organizational context, we remain ignorant of this failure to transfer.
- Successful performance improvement strategies do exist; however, they are seldom integrated into our most popular performance solutions.

PERFORMANCE IMPROVEMENT QUARTERLY, 22(2), pp. 97–110. © 2009 International Society for Performance Improvement. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/piq.20056
Clark and Estes (2000) also argue that performance improvement professionals tend to "scientize" craft solutions by citing research and evaluation that is often irrelevant or poorly designed. This could suggest a number of things, chiefly that (1) performance improvement professionals do not know how to integrate appropriate research and evaluation practices and findings into their work, (2) they do not want to integrate appropriate research and evaluation, or (3) they are unaware of the importance of integrating appropriate research and evaluation practices into their work. This challenge is also faced by other fields closely related to performance...