Assessing Teacher Usability of Written Expression Curriculum-Based Measurement

Date

2016-08

Abstract

Despite the critical value of writing in school and beyond, the National Assessment of Educational Progress recently indicated that 76% of students at both grades eight and 12 performed below the proficient level in writing (NAEP, 2011). This indicates a need to identify students at risk for poor writing performance. While current research supports Written Expression Curriculum-Based Measurement (WE-CBM) as a valid indicator of writing proficiency (e.g., Ritchey & Coker, 2012), it is less commonly used in practice. The present study assessed factors influencing the usability of WE-CBM using the Usage Rating Profile-Assessment (URP-A). The research questions were: 1) Do teachers rate WE-CBM as usable? 2) Do teachers’ descriptions of their previous CBM training, frequency of CBM use, and special education teaching status contribute to the WE-CBM Usability score? and 3) Do teacher usability ratings differ significantly between WE-CBM and Reading CBM (R-CBM) when controlling for teachers’ descriptions of their previous levels of CBM training? Participants were 162 teachers from schools in the southern United States who were introduced to WE-CBM and R-CBM and practiced scoring both measures. Participants then completed the URP-A regarding the use of each CBM type as a universal screening measure. Results indicated that although teachers slightly agreed that WE-CBM is usable, they rated R-CBM significantly higher than WE-CBM on overall Usability, Feasibility, Understanding, System Support, and System Climate. No significant difference in Acceptability was found between R-CBM and WE-CBM. Practical implications are discussed with the goal of informing modifications to WE-CBM’s existing format to make it more usable in practice.

Keywords

Assessments, Usability, Acceptability, Research-to-practice gap, Systems theory
