Measuring What Matters: A Case Study of Aligning Assessment Practices in Student Affairs with Institutionally Identified Student Learning Outcomes

Date

2014-05

Abstract

This study documented the measurement development processes and alignment of learning outcomes for a student affairs division at a large, urban research institution. A significant contribution of this study was the identification of the extent to which assessment instruments used across a division effectively measured intended outcomes. The three research questions for this study were:

  1. What processes did departments within a division of student affairs at a large urban research university use to develop assessment measures of student learning outcomes?

  2. To what extent are department-level assessment measures aligned with the specific institution’s student learning outcomes?

  3. To what extent do items used across student affairs departments to measure the institution’s specific student learning outcomes agree with similarly identified constructs (based on departmental identification) across the division of student affairs?

The theoretical framework and principles guiding this study were grounded in Biggs’ (1996) constructive alignment framework, rooted in constructivism, within a higher education context; the need for, and utility of, aligning learning outcomes with the measures used both inside and outside the classroom (Astin, 1993; Banta & Kuh, 1998; Kuh et al., 2007; Pace, 1980; Pascarella & Terenzini, 2005); and the role of assessment in a learning culture (Shepard, 2000).

For this research, a case study was conducted of the division of student affairs at a metropolitan research university in the midwestern United States (referred to throughout as MMU). In academic year 2012-2013, the year of this study, there were eight departments within the Division of Student Affairs at MMU, six of which participated. The methods employed in this study included interviews with leadership within the division of student affairs; document analysis of the 34 assessment instruments in use; and intraclass correlation (ICC) analysis of a random sample of items (n = 147), comparing the outcome assignments of cognitively interviewed coding debriefers with those of the departments.
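To illustrate the form of agreement analysis described here, the following is a minimal Python sketch of the two-way ICC for the average of k raters, ICC(2,k) in Shrout and Fleiss’s (1979) notation, which the ICC(2,3) result reported below appears to follow. The item codes are synthetic placeholders, not the study’s data.

```python
import numpy as np

def icc_2k(ratings: np.ndarray) -> float:
    """ICC(2,k) per Shrout & Fleiss (1979): two-way random effects,
    reliability of the *average* of k raters.
    ratings: (n_items, k_raters) array of numeric outcome codes."""
    n, k = ratings.shape
    grand = ratings.mean()
    # Mean squares from a two-way ANOVA without replication.
    ss_items = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_raters = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ms_items = ss_items / (n - 1)
    ms_raters = ss_raters / (k - 1)
    ms_error = (ss_total - ss_items - ss_raters) / ((n - 1) * (k - 1))
    return (ms_items - ms_error) / (ms_items + (ms_raters - ms_error) / n)

# Synthetic example: 3 debriefers assign numeric outcome codes to 10 items.
rng = np.random.default_rng(0)
true_codes = rng.integers(1, 6, size=10).astype(float)
coders = np.column_stack([true_codes + rng.normal(0, 0.5, 10) for _ in range(3)])
print(f"ICC(2,3) = {icc_2k(coders):.3f}")
```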

This study found that survey measures developed in-house were the most prominent across departments within the division. Of the division’s 34 measures, 32 were developed by staff members with varying degrees of division-level input. Across all measures, 585 items were in use, 394 of which the departments assigned to measure institutionally identified student learning outcomes. Of this group, 171 items met the study’s rubric rating of benchmark for measuring these outcomes, while none met the milestone or exemplar ratings. Agreement on the primary student learning outcome between the departments and the coders met the threshold of ICC > 0.70 (Cicchetti, 1994) in all analyses. The debriefers’ primary codes showed strong agreement as a group (ICC(2,3) = .813, p < .001), and the departments’ assignments agreed with those of debriefers A, B, and C at ICC(1,2) = .762, .717, and .767, respectively (all p < .001).
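The department-versus-debriefer comparisons above use the one-way form, ICC(1,2), which treats each department–debriefer pair as two ratings of the same item under a one-way random effects model. Below is a minimal sketch under the same assumptions as before (synthetic codes, Shrout & Fleiss notation); values above the 0.70 threshold cited from Cicchetti (1994) were taken in the study to indicate acceptable agreement.

```python
import numpy as np

def icc_1k(ratings: np.ndarray) -> float:
    """ICC(1,k) per Shrout & Fleiss (1979): one-way random effects,
    reliability of the mean of k ratings per item.
    ratings: (n_items, k_raters) array of numeric outcome codes."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ms_between = ss_between / (n - 1)
    # Within-item mean square: residual variation among raters of the same item.
    ms_within = (((ratings - grand) ** 2).sum() - ss_between) / (n * (k - 1))
    return (ms_between - ms_within) / ms_between

# Synthetic example: a department's codes paired with one debriefer's codes
# for a sample of 147 items, mirroring the sample size described above.
rng = np.random.default_rng(1)
dept = rng.integers(1, 6, size=147).astype(float)
debriefer = dept + rng.normal(0, 0.6, size=147)
print(f"ICC(1,2) = {icc_1k(np.column_stack([dept, debriefer])):.3f}")
```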

Scholars agree that student learning outcomes should be measurable, meaningful, realistic, and ongoing while remaining aligned with the institutional mission (Bresciani et al., 2004; Huba & Freed, 2000; Maki, 2004), and these findings extend that work. This study also highlights the assessment and instrument-design competency needs of student affairs professionals, particularly given the reliance on in-house developed measures, supporting the efforts of NASPA & ACPA (2010). Further, this study suggests that more item-level analysis is needed to investigate potential confounding across learning outcomes and to create a richer understanding of item alignment. For practitioners, findings from this study serve as process documentation and provide guidance for aligning learning outcomes across student affairs divisions at postsecondary institutions.

Keywords

Student affairs, Assessments, Student learning, Student outcomes, Intraclass Correlation Coefficient, Case studies, Higher education, Postsecondary education, Culture of Assessment, Culture of Evidence
