Is NSSE Messy? An Analysis of Predictive Validity
David DiRamio and David Shannon

Paper presentation for the Annual Meeting of the American Educational Research Association April 10, 2011 in New Orleans, LA

Contact Information:

David DiRamio, Ph.D.
Associate Professor, Administration of Higher Education
4096 Haley Center, Auburn University, AL 36849
Office: (334) 844-3065
E-mail:

David Shannon, Ph.D.
Professor, Educational Psychology
4028 Haley Center, Auburn University, AL 36849
Office: (334) 844-3071
E-mail:

Running Head: Is NSSE Messy? 4‐5‐11 

Is NSSE Messy? An Analysis of Predictive Validity

Each year at colleges and universities across the nation, senior administrators and governance officials (i.e., trustees) gather to hear an annual report, typically presented by institutional research staff, detailing how their own school scored on the National Survey of Student Engagement (NSSE). The venerable NSSE, with its five benchmarks and peer comparisons, provides empirical evidence about participation in the activities and programs that institutions provide for students’ learning and development. Each institution’s annual NSSE report details for stakeholders how well (or poorly) the institution is engaging its students. The premise that student engagement is a proximal measure, or “proxy,” for learning is well supported in the literature (Angelo & Cross, 1993; Astin, 1993; Kuh, Kinzie, Schuh, & Whitt, 2005). NSSE is highly regarded as an instrument for measuring factors that point to student learning and success. But do NSSE scores actually correlate with the academic success outcomes that are on the minds of senior officials?

Others have investigated similar issues related to NSSE (Gordon, Ludlum, & Hoey, 2008; Pascarella & Seifert, 2008; Swerdzewski, Miller, & Mitchell, 2007), but ours is a different approach. In this study we investigated the relationship between NSSE and two of the outcome measures that typically concern campus policy makers. The “messiness” resides in the idea that higher NSSE scores and greater student engagement could, ironically, be counterproductive to some of the outcomes that governance officials and senior administrators are concerned with in today’s environment of increased accountability. In other words, a trustee could be listening to a glowing report about how well her institution is doing in terms of student engagement, not realizing that high NSSE scores might actually be negatively associated with outcomes she considers desirable for the institution and its students.
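The correlation question raised above can be made concrete with a small sketch. One simple way to check whether a benchmark score is associated with an outcome is a Pearson correlation; the data, variable names, and single-benchmark setup below are entirely invented for illustration and are not the study's actual analysis:

```python
# Minimal sketch of a predictive-validity check: correlate a single
# (hypothetical) NSSE benchmark score with GPA. All data are invented.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented sample: benchmark scores (0-100) and GPAs for six students.
engagement = [42.0, 55.5, 61.0, 48.3, 70.2, 66.7]
gpa = [2.8, 3.1, 3.4, 2.9, 3.6, 3.3]

r = pearson_r(engagement, gpa)
# A negative r here would illustrate the paper's "messiness": higher
# engagement scores paired with a less desirable outcome value.
```

In practice a study like this would also report statistical significance and control for student characteristics; the sketch shows only the core idea of measuring agreement between the instrument and a more direct outcome.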
After all, doesn’t engagement take away from the “time on task” that a student would normally devote to coursework and studying, thereby potentially lowering a student’s grades and prolonging the length of time he or she stays at the institution? The purpose of this study is to investigate predictive validity and other statistical associations between the NSSE instrument and two commonly used student outcome measures: time to graduation and grade point average (GPA). The objective is to establish an empirical link (whether positive or negative) between NSSE, with its five benchmarks, and the contemporary outcomes that concern stakeholders and policy makers, such as the length of time a student takes to graduate from the school (Schmidt, 2005). Figure 1 presents a visual characterization of the concepts that frame this research: NSSE measures student engagement, which is considered a proxy for student learning, and, ostensibly, learning levels should affect graduation outcomes and grades. This type of logic model is typically used in studies of predictive validity, which seek to measure agreement between the results from the instrument being evaluated and findings obtained from more direct measurements (Litwin, 2002). As a


