How SmithKline Beecham Makes Better Resource-Allocation Decisions
In 1993, SmithKline Beecham was spending more than half a billion dollars per year on R&D, the lifeblood of any pharmaceuticals company. Ever since the 1989 merger that created the company, however, SB believed that it had been spending too much time arguing about how to value its R&D projects—and not enough time figuring out how to make them more valuable. With more projects successfully reaching late-stage development, where the resource requirements are greatest, the demands for funding were growing. SB’s executives felt an acute need to rationalize their portfolio of development projects. The patent on its blockbuster drug Tagamet was about to expire, and the company was preparing for the impending squeeze: it had to meet current earnings targets and at the same time support the R&D that would create the company’s future revenue streams. The result was a “constrained-budget mentality” and a widely shared belief that SB’s problem was one of prioritizing development projects.

Major resource-allocation decisions are never easy. For a company like SB, the problem is this: How do you make good decisions in a high-risk, technically complex business when the information you need to make those decisions comes largely from the project champions who are competing against one another for resources? A critical company process can become politicized when strong-willed, charismatic project leaders beat out their less competitive colleagues for resources. That in turn leads to the cynical view that your project is as good as the performance you can put on at funding time.

What was the solution? Some within the company thought that SB needed a directive, top-down approach. But our experience told us that no single executive could possibly know enough about the dozens of highly complex projects being developed on three continents to call the shots effectively.

In the past, SB had tried a variety of approaches.
One involved long, intensive sessions of interrogating project champions and, in the end, setting priorities by a show of hands. Later that process evolved into a more sophisticated scoring system based on a project’s multiple attributes, such as commercial potential, technical risk, and investment requirements. Although the approach looked good on the surface, many people involved in it felt in the end that the company was following a kind of pseudoscience that lent an air of sophistication to fundamentally flawed assessments of data and logic.

The company had also been disappointed by a number of more quantitative approaches. It used a variety of valuation techniques, including projections of peak-year sales and five-year net present values. But even when all the project teams agreed to use the same approach—allowing SB to arrive at a numerical prioritization of projects—those of us involved in the process were still uncomfortable. There was no transparency to the valuation process, no way of knowing whether the quality of thinking behind the valuations was at all consistent. “Figures don’t lie,” said one cynical participant, “but liars can figure.” At the end of the day, we couldn’t escape the perception that decisions were driven by the advocacy skills of project champions—or made behind closed doors in a way that left many stakeholders in the process unpersuaded that the right road had been taken.

As we set out in 1993 to design a better decision-making process, we knew we needed a good technical solution—that is, a valuation methodology that reflected the complexity and risk of our investments. At the same time, we needed a solution that would be credible to the organization. If we solved the technical problem alone, we might find the right direction, but we would fail to get anyone to follow. That is typically what happens as a result of good backroom analysis, however well intentioned and well executed it is. But solving the organizational...