AUGUST 2013

Business Technology Practice
Enhancing the efficiency and effectiveness of application development

Software has become critical for most large enterprises. They should adopt a reliable output metric that is integrated with the process for gathering application requirements.
Michael Huskins, James Kaplan, and Krish Krishnakanthan
Few organizations have a way to measure the output of their application-development projects. We believe the best solution is to combine use cases with use-case-point metrics. Use cases describe the users who interact with the application, and how the application interacts with them. Use-case points, which are based on data captured from these interactions, can be calculated in less than a day, even for large projects, and refined as more requirements are specified. Organizations that have successfully adopted this approach have started with a pilot involving several teams. What is crucial for the acceptance of these metrics is how leadership uses them; if they are used only to reward or penalize developers, serious resistance is likely.

Most large companies invest heavily in application development, and they do so for a compelling reason: their future might depend on it. Software spending in the United States jumped from 32 percent of total corporate IT investment in 1990 to almost 60 percent in 2011,¹ as software gradually became critical for almost every company's performance.²

Yet in our experience, few organizations have a viable means of measuring the output of their application-development projects. Instead, they rely on input-based metrics, such as the hourly cost of developers, variance to budget, or the percentage of delivery dates achieved. These metrics are useful because they indicate the level of effort that goes into application development, but they do not truly answer the question: how much software functionality did a team deliver in a given time period? Or, put another way, how productive was the application-development group?

With big money and possibly the company's competitiveness at stake, why do many application-development organizations fly blind, with no metric in place to measure productivity? First, every metric carries some overhead to calculate and track, and for some metrics that overhead has proved larger than the benefits they afford. Second, many application-development organizations lack standardized practices for calculating metrics; it is difficult to deploy output measurements, for example, if application teams follow different approaches to capturing the functional and technical requirements for their projects. Finally, and perhaps most important, there is often a certain amount of resistance from application developers themselves. Highly skilled IT professionals do not necessarily enjoy being measured or held accountable to a productivity metric, especially if they feel that it does not equitably account for relevant differences among development projects. As a result, many organizations believe there is no viable productivity metric that can address all of these objections.

Although all output-based metrics have their pros and cons and can be challenging to implement, we believe the best solution to this problem is to combine use cases (UCs), a method for gathering requirements for application-development projects, with use-case points (UCPs), an...

¹ "Private fixed investment in equipment and software by type," table group 5.5.5, Concepts and Methods of the US National Income and Product Accounts, US Bureau of Economic Analysis, November 2011.
² For more information, see Hugo Sarrazin and Johnson Sikes, "Competing in a digital world: Four lessons from the software industry," McKinsey on Business Technology, Number 28, Winter 2012, mckinsey.com.
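The article does not spell out the use-case-point arithmetic, but the widely cited version of the method (use cases scored with Gustav Karner's weights) is simple enough to sketch. The weight tables and the default adjustment factors of 1.0 below are assumptions taken from the published UCP method, not from this article; real adoptions calibrate them to their own projects.

```python
# Minimal sketch of a use-case-point (UCP) calculation, following the
# commonly cited Karner weights. These numbers are illustrative
# defaults, not values prescribed by the article.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf=1.0, ecf=1.0):
    """Compute use-case points from complexity labels.

    actors, use_cases: lists of "simple" / "average" / "complex".
    tcf, ecf: technical and environmental complexity factors;
    1.0 means no adjustment.
    """
    uaw = sum(ACTOR_WEIGHTS[a] for a in actors)         # unadjusted actor weight
    uucw = sum(USE_CASE_WEIGHTS[u] for u in use_cases)  # unadjusted use-case weight
    return (uaw + uucw) * tcf * ecf

# Example: two simple actors and one complex actor interacting with
# three use cases of average complexity.
ucp = use_case_points(["simple", "simple", "complex"],
                      ["average", "average", "average"])
print(ucp)  # → 35.0
```

Because the inputs are just counts and complexity labels taken from the requirements documents, a tally like this can indeed be produced early in a project and refined as more use cases are specified, which is what makes the metric cheap enough to track.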