Evaluation permits the critical question to be asked and answered: have the goals and objectives of the new curriculum been met? It assesses individual achievement to satisfy external requirements, provides information that can be used to improve the curriculum, and documents accomplishments or failures. Evaluation can provide feedback and motivation for continued improvement for learners, faculty, and innovative curriculum developers. To ensure that important questions are answered and relevant needs met, it is necessary to be methodical in designing the evaluation process.
In the last decade, we have observed the rapid evolution of assessment methods in medical education, from traditional approaches toward more sophisticated evaluation strategies. Single methods were replaced by multiple methods, and paper-and-pencil tests were replaced by computerized tests. Normative pass/fail decisions gave way to assessment standards, and the assessment of knowledge has been replaced by the assessment of competence. Efforts have also been made to standardize subjective judgments, to develop sets of performance standards, to gather assessment evidence from multiple sources, and to replace the search for knowledge with the search for "reflection in action" in a working environment. Assessment tools such as the objective structured clinical examination (OSCE), the portfolio approach, and high-technology simulations are examples of these new measurement tools. The introduction of these new assessment methods, and the results they have produced, has had a system-wide effect on medical education and the medical profession in general. The commonly used slogan that "assessment drives learning", although certainly true, presents a rather limiting concept. It has therefore been suggested that it be replaced by an alternative motto: "assessment expands professional horizons" (M. Friedman, 2000). This stresses the important role of assessment in developing multiple dimensions of the medical profession.
Recent developments in so-called "quantified tests", standardized patient examinations, and computer case simulations, together with the present focus on the quality of assessment evidence and the use of relevant research to validate preferred assessment approaches, have been impressive, initiating the birth of Best Evidence-Based Assessment (BEBA). The problem, however, is that such performance-based assessments consume substantial resources and require a high level of technology. They are not readily applied in developing countries, or even in many developed ones, because of their expense and logistical demands.
Therefore, we cannot forget the value and importance of assessment methods that recognize the primacy of evaluations by teachers and supervisors in the real health care environment. This so-called "descriptive evaluation", which uses words to describe and summarize a student's level of competence, stands in contrast to quantitative assessment techniques, whose summary of achievement yields a score, typically a number. This is an area where summative faculty judgments are necessary, but certainly not sufficient, to pronounce a student competent; they should be supplemented by quantified assessments of professional performance.

"Objective" vs. "Traditional" Methods of Evaluation
Most educators would accept that prolonged periods of observing students working with patients on a regular basis have more validity than most assessment tests of clinical competence. The challenge is achieving the reliability and precision in these observations that a valid assessment requires. Ideally, an evaluation should represent a spectrum of skills, including the cognitive ability to know what information is worth remembering, the personal skill to manage one's time successfully, and a commitment to self-directed learning.