Student Satisfaction with Training in Clinical, Counseling, and School Psychology PhD Programs: Factors Predicting Student Outcomes
APA-accredited graduate programs are required to report student outcomes annually to the Commission on Accreditation (CoA) and to make some of this information publicly available online (e.g., time to completion, attrition, and internship match rates). Prior research shows that some student outcomes are correlated with program components (e.g., advising, clarity of expectations, financial support, research emphasis, and departmental relationships) and associated with individual student characteristics (Callahan et al., 2013; de Valero, 2001). Although statistically significant in some studies, these correlations are small, and no definitive set of characteristics predicts a training program's success on student outcomes. Therefore, students, trainers, and evaluators are interested in finding other variables that predict success in APA-accredited PhD programs.

A promising new construct for predicting outcomes is satisfaction. Student satisfaction with training has been linked to constructs important to graduate programs in psychology, such as student recruitment (Borden, 1995; Golde, 2001), job satisfaction and burnout after graduation (Huebner, 1993), student motivation and productivity, and program completion (Love, 1993). Studies of satisfaction spanning a variety of academic training domains exist in other fields (Chen et al., 2012; Gill et al., 2012), but few studies have examined the relationship between student satisfaction and outcomes in psychology graduate programs.

The Psychology Program Satisfaction Survey (PPSS) was developed for this study to evaluate doctoral students' satisfaction with their psychology PhD training. In the first phase of the study, the PPSS was pilot tested and its scales were refined based on a principal components analysis (PCA). The refined version had eight components: Research, Diversity, Relational Support, Clinical Assessment, Clinical Intervention, Academic Enablers, Practicum, and Coursework.
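To illustrate the scale-refinement step, the sketch below runs a PCA on simulated survey responses and retains components by the conventional Kaiser criterion (eigenvalue > 1). The PPSS item data are not public, so the item counts, factor structure, and noise level here are entirely hypothetical; this is a minimal sketch of the general technique, not the study's actual analysis.

```python
import numpy as np

# Hypothetical simulation: 200 respondents, 10 Likert-style items driven
# by two latent satisfaction factors (these numbers are illustrative only).
rng = np.random.default_rng(0)
n_students, n_items = 200, 10

latent = rng.normal(size=(n_students, 2))
loadings = np.zeros((2, n_items))
loadings[0, :5] = 1.0   # items 1-5 load on factor 1
loadings[1, 5:] = 1.0   # items 6-10 load on factor 2
X = latent @ loadings + rng.normal(scale=0.5, size=(n_students, n_items))

# PCA on the item correlation matrix via eigendecomposition.
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]  # sorted descending

# Kaiser criterion: keep components whose eigenvalue exceeds 1.
n_retained = int(np.sum(eigvals > 1.0))
print(n_retained)  # -> 2 for this two-factor simulation
```

Retained components would then be interpreted from their item loadings, which is how named scales such as "Research" or "Practicum" emerge from clusters of correlated items.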
To test the unique contributions of the PPSS scales, regression analyses were used to predict student outcomes after controlling for program components and student characteristics. Program components (i.e., having an on-site clinic and student stipend) were significant predictors of programs' average percentage of internship matches to APA-accredited sites via APPIC. In addition, student satisfaction with training was significantly associated with matching to an APA-accredited internship, above and beyond program variables. Given the importance of internship match rates for APA-accredited programs, the PPSS could be an important new tool for evaluating programs. Although the PPSS did not contribute unique variance to the prediction of attrition or time to completion within programs, giving students a channel to voice their experiences within their programs underscores their importance as stakeholders. Thus, future studies could examine whether measuring student satisfaction itself improves student satisfaction. Pending replication, the CoA, prospective students, program faculty, and universities may consider using this measure to shape program policies and opportunities to maximize internship match rates. Further investigation is needed to improve prediction of time to completion and attrition, possibly through refinement of the PPSS or identification of other predictor variables.
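The "above and beyond" logic of the analysis can be sketched as a hierarchical (incremental) regression: fit a baseline model with program components, then add the satisfaction score and compare variance explained. The data below are simulated, since the program-level records are not public; the variable names follow the study, but all effect sizes and sample sizes are hypothetical.

```python
import numpy as np

# Hypothetical program-level data (all coefficients are made up for
# illustration; the real study's estimates are not reproduced here).
rng = np.random.default_rng(1)
n = 150  # hypothetical number of programs

clinic = rng.integers(0, 2, size=n).astype(float)  # on-site clinic (0/1)
stipend = rng.normal(size=n)                        # standardized stipend
ppss = rng.normal(size=n)                           # mean PPSS satisfaction

# Simulated outcome: APA-accredited internship match rate (standardized).
match = 0.4 * clinic + 0.3 * stipend + 0.25 * ppss \
        + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """OLS R^2 with an intercept column, via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Step 1: program components only. Step 2: add PPSS satisfaction.
r2_base = r_squared(np.column_stack([clinic, stipend]), match)
r2_full = r_squared(np.column_stack([clinic, stipend, ppss]), match)
print(round(r2_full - r2_base, 3))  # incremental R^2 attributable to PPSS
```

A positive incremental R² (tested for significance in the study via the usual F-test for nested models) is what "contributing unique variance above and beyond program variables" means operationally.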