Beginning in the 1970s, two developments dramatically changed employee selection. First, the development of meta-analysis, arguably one of the most influential methodological advances of recent decades, made it possible to cumulate quantitatively the results of large numbers of small-scale studies, producing the equivalent of a single massive-scale study. Second, the results of large-scale studies of military personnel and others also became available. Both kinds of studies provided strong evidence that cognitive ability tests have remarkably general validity for selection across a broad range of jobs.

Given this state of affairs, it is not surprising that some have argued for near-universal use of cognitive ability tests as the primary selection tool. In addition to the positive results from meta-analytic and large-scale predictive-validity studies, cognitive ability tests are remarkably practical. After 85 years of research, they are among the most reliable measures available to social scientists. Also, unlike selection tools such as checking references or evaluating prior performance, cognitive ability tests can be given to individuals who are new to the job market. Despite these strengths, others have argued that it is important to look beyond general cognitive ability if one is to understand why people achieve on the job to the extent that they do.

The most important issue in HR selection testing is determining a test's validity. The precise definition of validity can vary with the circumstances, the specific tools used, and the application. For most selection purposes, however, a selection test is valid if the characteristic(s) it measures is related to the requirements and/or some important aspect of the job; in other words, a test is valid if there is a link between the test score and job performance.
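The pooling step that meta-analysis performs can be illustrated with a minimal sketch. The study data below are hypothetical, and this shows only the simplest "bare-bones" idea: weighting each small study's observed validity coefficient by its sample size to obtain one cumulative estimate.

```python
# Hypothetical studies: (sample size, observed validity coefficient r).
# These numbers are invented for illustration only.
studies = [
    (68, 0.21),
    (120, 0.33),
    (45, 0.18),
    (210, 0.29),
]

total_n = sum(n for n, _ in studies)

# Weight each study's r by its sample size, so larger studies count more.
r_bar = sum(n * r for n, r in studies) / total_n

print(f"Pooled N = {total_n}, sample-size-weighted mean validity = {r_bar:.3f}")
```

Four small studies, none individually convincing, combine into a single estimate based on 443 cases; full meta-analytic procedures additionally correct for artifacts such as sampling error and measurement unreliability.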
The degree to which an employment selection test has validity tells the testing entity what it can conclude or predict about someone's job performance from his or her test scores. A test's validity is established for a specific purpose, and it may not be valid for purposes other than those for which it has been validated. Criterion-related validity is the correlation or other statistical relationship between the selection test score (the predictor) and job performance (the criterion). If those who score low on a test also perform poorly (and vice versa), the test is said to have high criterion-related validity. Content-related validity is a demonstration that the content of the test reflects important job-related behaviors and measures important job-related knowledge or skills. Construct-related validity is evidence that a test measures the constructs, or abstract characteristics, that are important to successful performance of the job.

For psychological tests used in selection, a test's criterion-related validity is usually the variable of interest to researchers, and it is the validity coefficient, the actual correlation coefficient between a test score and some job-performance criterion, that is referred to when validity is discussed in the HR literature. Having evidence of the validity of selection tests is essential for any organization using such tools. Collecting these data is the principal way companies demonstrate that they have met the Uniform Guidelines' requirements should hiring procedures result in adverse impact (i.e., disproportionate hiring outcomes) against protected groups. Many experts and personnel selection specialists believe that test validity can be attenuated or even sacrificed to reduce adverse impact. Often, a practitioner is faced with a choice among tests having very different costs, degrees of validity, and fairness.
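Because the validity coefficient is simply the Pearson correlation between predictor and criterion, it can be computed directly. The sketch below uses invented scores for seven hypothetical applicants; the names and numbers are not from any real validation study.

```python
# Hypothetical data: selection test scores (predictor) and later
# job-performance ratings (criterion) for seven applicants.
from math import sqrt

test_scores = [52, 61, 70, 74, 80, 88, 91]          # predictor
performance = [2.1, 2.8, 3.0, 3.4, 3.3, 4.1, 4.5]   # criterion

n = len(test_scores)
mean_x = sum(test_scores) / n
mean_y = sum(performance) / n

# Pearson correlation: covariance divided by the product of the
# standard deviations (constant factors of n cancel out).
cov = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(test_scores, performance))
var_x = sum((x - mean_x) ** 2 for x in test_scores)
var_y = sum((y - mean_y) ** 2 for y in performance)

validity = cov / sqrt(var_x * var_y)  # the validity coefficient
print(f"validity coefficient r = {validity:.3f}")
```

A coefficient near +1 means high scorers reliably perform well and low scorers poorly; a coefficient near 0 means the test tells the organization essentially nothing about future performance.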
The Uniform Guidelines provide guidance on making such choices: When two procedures are available that are valid and reliable and that serve the company's interest in efficient and trustworthy workmanship, the company should use the procedure that...