Measuring What Matters: A Case Study of Aligning Assessment Practices in Student Affairs with Institutionally Identified Student Learning Outcomes

dc.contributor.advisor: Horn, Catherine L.
dc.contributor.committeeMember: McKinney, Lyle
dc.contributor.committeeMember: Walker, Richard
dc.contributor.committeeMember: Ross, Frank E., III
dc.creator: Shefman, Pamelyn Klepal
dc.date.accessioned: 2016-08-20T21:48:29Z
dc.date.available: 2016-08-20T21:48:29Z
dc.date.created: May 2014
dc.date.issued: 2014-05
dc.date.updated: 2016-08-20T21:48:30Z
dc.description.abstract: This study documented the measurement development processes and alignment of learning outcomes for a student affairs division at a large, urban research institution. A significant contribution of this study was the identification of the extent to which assessment instruments used across a division effectively measured intended outcomes. The three research questions for this study were:
1. What processes did departments within a division of student affairs at a large urban research university use to develop assessment measures of student learning outcomes?
2. To what extent are department-level assessment measures aligned with the specific institution's student learning outcomes?
3. To what extent do items used in measuring the institution's specific student learning outcomes across student affairs departments agree with similarly identified constructs (based on departmental identification) across the division of student affairs?

The theoretical framework and principles that guided this study were Biggs' (1996) linking of a constructivist framework to the higher education context, the need for and utility of aligning learning outcomes with the measures used both inside and outside the classroom (Astin, 1993; Banta & Kuh, 1998; Kuh et al., 2007; Pace, 1980; Pascarella & Terenzini, 2005), and the role of assessment in a learning culture (Shepard, 2000). For this research, a case study was conducted of a division of student affairs at a metropolitan university in the midwestern United States (MMU). In academic year 2012-2013, the year of this study, there were eight departments within the Division of Student Affairs at MMU, six of which participated. The methods employed in this study included interviews with leadership within the division of student affairs; document analysis of the 34 instruments in use; and intraclass correlation (ICC) analysis of a random sample of items (n = 147), comparing the outcome assignments of cognitively interviewed coding debriefers with those of the departments.

This study found that in-house developed survey measures were the most prominent across departments within the division of student affairs: of the division's 34 measures, 32 were developed by staff members with varying degrees of division-level input. Across all measures, 585 items were used, 394 of which the departments assigned to measure institutionally identified student learning outcomes. Of this group, 171 items met the study's rubric benchmark rating for measuring these outcomes, while none met the milestone or exemplar ratings. Agreement on the primary student learning outcome between the departments and the coders met the threshold of ICC > 0.70 (Cicchetti, 1994) in all analyses. As a group, the debriefers' primary codes were in agreement (ICC(2,3) = .813, p < .001). Further, the departments' assignments agreed with those of debriefers A, B, and C at ICC(1,2) = .762, .717, and .767, respectively (all p < .001).

Scholars agree that student learning outcomes should be measurable, meaningful, realistic, and ongoing, and aligned with the institutional mission (Bresciani et al., 2004; Huba & Freed, 2000; Maki, 2004); these findings extend that work. This study also highlights the assessment and instrument-design competency needs of student affairs professionals, particularly given the reliance on in-house developed measures, supporting the efforts of NASPA & ACPA (2010). Further, this study suggests that more item-level analysis is needed to investigate potential confounding across learning outcomes and to build a richer understanding of item alignment. For practitioners, findings from this study serve as process documentation and provide guidance in aligning learning outcomes for student affairs divisions at postsecondary institutions.
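The agreement analyses summarized in the abstract can be illustrated computationally. The snippet below is not the author's analysis code; it is a minimal Python sketch, using the pingouin library and a hypothetical long-format table of coder assignments (column names "item", "coder", and "outcome_code" are invented for illustration), showing how an ICC(2,3) estimate could be computed and checked against Cicchetti's (1994) 0.70 threshold.

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format ratings: one row per (item, coder) pair.
    # 'outcome_code' is the numeric ID of the institutional learning outcome
    # each coder assigned to the item (toy values, not the study's data).
    ratings = pd.DataFrame({
        "item": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "coder": ["A", "B", "C"] * 4,
        "outcome_code": [2, 2, 2, 1, 1, 3, 4, 4, 4, 5, 5, 5],
    })

    # pingouin returns all six ICC forms; ICC(2,3) -- two-way random effects,
    # average of k = 3 raters -- corresponds to the 'ICC2k' row.
    icc = pg.intraclass_corr(data=ratings, targets="item",
                             raters="coder", ratings="outcome_code")
    icc2k = icc.set_index("Type").loc["ICC2k"]

    # Cicchetti's (1994) guideline: ICC > 0.70 indicates acceptable agreement.
    print(f"ICC(2,3) = {icc2k['ICC']:.3f}, p = {icc2k['pval']:.3g}")
    print("Meets threshold" if icc2k["ICC"] > 0.70 else "Below threshold")

The same call yields the 'ICC1k' row for the study's ICC(1,2)-style department-versus-debriefer comparisons when only two raters per item are supplied.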
dc.description.department: Educational Psychology, Department of
dc.format.digitalOrigin: born digital
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/10657/1424
dc.language.iso: eng
dc.rights: The author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).
dc.subject: Student affairs
dc.subject: Assessments
dc.subject: Student learning
dc.subject: Student outcomes
dc.subject: Intraclass Correlation Coefficient
dc.subject: Case studies
dc.subject: Higher education
dc.subject: Postsecondary education
dc.subject: Culture of Assessment
dc.subject: Culture of Evidence
dc.title: Measuring What Matters: A Case Study of Aligning Assessment Practices in Student Affairs with Institutionally Identified Student Learning Outcomes
dc.type.dcmi: Text
dc.type.genre: Thesis
thesis.degree.college: College of Education
thesis.degree.department: Educational Psychology, Department of
thesis.degree.discipline: Educational Psychology and Individual Differences
thesis.degree.grantor: University of Houston
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle

Name: SHEFMAN-DISSERTATION-2014.pdf
Size: 1.76 MB
Format: Adobe Portable Document Format

License bundle

Name: LICENSE.txt
Size: 1.84 KB
Format: Plain Text