The purpose of this paper is to provide an overview of statistical methods commonly used in the measurement of corporate performance by examining various models and techniques. Corporate performance can be defined as a set of fundamental measures of organizational aptitude used to assess the "health" of the organization and to give operations focused direction while supporting managers. Statistical analysis of performance measurement data is needed to measure a company's intangible assets, such as customer relationships, internal processes, and employee learning and development, while keeping them aligned with the corporation's overall business strategy. The task of evaluating the latest performance measures and aligning business strategy accordingly poses serious challenges to managers, who must balance daily business demands with long-term strategic goals. The use of statistics to meet these goals and to design a comprehensive performance analysis is vital. For example, consider the research on employee-owned corporations with employee stock ownership plans (ESOPs), such as Southwest Airlines. An analysis of the performance of such companies finds that ESOPs not only increase the organization's ability to attract and retain talent but also increase sales by 2.3% to 2.4% per year over what would have been expected absent an ESOP [1]. That study used statistical methods that will be examined in more detail later in this paper.

Probability in Measuring Performance
As mentioned earlier, the process of performance measurement is often uncertain, with many unexpected variations. It therefore depends largely on a valid probability assessment of uncontrollable events (or factors) and on a risk assessment of the organization. In performance analysis, probability is the tool used to anticipate what the distribution of data should look like under a given model. Some phenomena may seem random or surprising; these events or data points are not chaotic, however, but display an order that emerges over time and is described by a distribution. One of the major concepts central to the use of probability in models is variation. Deriving a statistical conclusion requires probability, the nature of which is rooted in the distribution of the data. Since probability is a tool for inspecting the likelihood of an event, it has various uses in performance assessment, along with numerous tests and measures. A probability density depends on the measure that is set – that is, there should be a model that maps observed levels to categories such as dysfunction or success, calibrated against realistic models of the company's performance.

[1] The National Center for Employee Ownership (NCEO) (2002), "Employee Ownership and Corporate Performance."

Arsham [2] mentions several methods to measure probability:
1. The Classical Approach – Depends on the condition that the outcomes of an experiment are equally likely, which is not always the case in corporate performance analysis. A repeated pattern can, however, be isolated, which brings us to the next method. It can be defined as:

P(X) = (number of favorable outcomes) / (total number of possible outcomes)

2. Relative Frequency Approach – Based on repeating patterns of corporate behavior in the past. It assumes, however, that past events are likely to occur again, which is not always true. For example, the downturn in the tourist industry was not expected before the September 11, 2001 attacks and therefore was not part of corporate performance analyses in the preceding year. The relative frequency approach is defined as:

P(X) = (number of times the event occurred) / (total number of opportunities for the event to occur)

3. Subjective Approach – Based on personal judgment and experience; not used in corporate performance analysis.

4. Anchoring – Adjusting historical values with future...

[2] Arsham, Hossein, "Business Statistics: Revealing Facts from Figures."
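The first two approaches above are simple ratios, and a minimal sketch in Python makes the distinction concrete. The scenario figures here are hypothetical, chosen only for illustration: a portfolio in which 3 of 12 equally likely projects meet their target (classical approach), and a firm that hit its quarterly sales target in 18 of the last 24 quarters (relative frequency approach).

```python
from fractions import Fraction

def classical_probability(favorable: int, total: int) -> Fraction:
    """Classical approach: assumes all outcomes are equally likely.
    P(X) = favorable outcomes / total possible outcomes."""
    return Fraction(favorable, total)

def relative_frequency(occurrences: int, opportunities: int) -> float:
    """Relative frequency approach: based on observed past behavior.
    P(X) = times the event occurred / opportunities for it to occur."""
    return occurrences / opportunities

# Hypothetical data: 3 of 12 equally likely projects meet their target.
print(classical_probability(3, 12))   # 1/4
# Hypothetical data: sales target hit in 18 of the last 24 quarters.
print(relative_frequency(18, 24))     # 0.75
```

Note the limitation the text raises: the relative frequency estimate is only as good as the assumption that the past 24 quarters resemble the future, which a shock such as the September 11 attacks can invalidate.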