Classroom Practices within the School Psychology Graduate Training Program: Assessing Interns' Competencies
Shannon E. Dowd-Eagle
John W. Eagle
Elizabeth Gibbons Holtzman
Over the past few decades, the field of school psychology has witnessed a paradigm shift in how services are provided to students and families. School psychology is moving away from its traditional refer-test-place model of service delivery, which centered on diagnosing a problem internal to the child via individualized IQ and achievement tests and then placing the child in a special education program (Iverson, 2002). The previous model underestimated the complexity of variables impacting the child (Graden, Casey, & Bonstrom, 1985) and typically resulted in a discrepancy score, which often did not provide teachers with instructionally relevant results. These limitations served as an impetus for modifying the delivery of special education services to a problem-solving, response-to-intervention framework, or RTI (Reschly, 2008).
RTI is conceptualized as a three-tiered model (i.e., Universal, Secondary, and Tertiary) that includes a continuum of supports. Universal/Tier I interventions are available to all students and delivered within the general education classroom. Secondary Level/Tier II interventions are designed to provide tailored support to small groups of students who are identified as "at-risk" via universal screening procedures. Finally, Tier III offers intensive and individualized intervention to students who have not responded to Tier I or II strategies. This approach considers multiple variables both internal and external to the child, as well as the reciprocal interactions across systems. An emphasis is placed on ecological assessment that explores the child's individual capabilities in relation to environmental demands and provides opportunities to establish a match between the child and environmental expectations. Within this framework, school psychologists aim to define the needs of the child and system, assess those needs, implement evidence-based interventions designed to enhance academic, social, and/or behavioral functioning, and evaluate the effectiveness of the services.
Thus, classroom practices within the school psychology graduate training program have been revised to reflect this paradigm shift. Coursework and field-based experiences go beyond training in IQ and achievement testing, and place a greater emphasis on supporting schools in the development of a continuum of services. Students are trained in universal level screening, progress monitoring, evidence-based Tier I and II interventions and evaluation strategies to help determine what level of support is needed for each student.
Assessing Interns' Competencies
School Psychology interns spend the third and final year of the training program in a 1200 hour field-based internship. In conjunction with their field-based work, interns attend a weekly seminar and are required to complete a portfolio that provides a summative evaluation of required school psychological competencies. Interns, with support and supervision from college and field-based supervisors, complete three separate single-n case studies for the portfolio. For all case studies interns are required to provide evidence-based supports for students at-risk for (1) academic, (2) behavioral, and (3) social concerns. In all three cases, interns define the student's needs in measurable terms, conduct ecological assessments, implement an evidence-based intervention to address the need(s), and evaluate the effectiveness of the intervention via outcome data. Intervention outcomes are evaluated using a multi-modal approach that consists of three distinct measures.
Effect Size
Interns collect "pre" and "post" progress-monitoring data via direct observation. Effect sizes are calculated to compute outcomes based on differences between baseline and treatment phases within single-n case designs. As a standardized metric, an effect size of 1.0 indicates a positive outcome difference of 1 standard deviation between baseline and treatment phases. Effect sizes of .8 or above are considered large (Cohen, 1992).
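For readers unfamiliar with this metric, the sketch below illustrates one common way an effect size is computed for a single-n (A-B) case design: the standardized mean difference, which divides the difference between phase means by the baseline standard deviation. The specific formula the interns use is not detailed here, and the observation data below are hypothetical.

```python
# Illustrative sketch only: a standardized mean difference effect size
# for a single-n A-B design, using hypothetical progress-monitoring data.
from statistics import mean, stdev

def phase_effect_size(baseline, treatment):
    """(treatment mean - baseline mean) / baseline standard deviation."""
    return (mean(treatment) - mean(baseline)) / stdev(baseline)

# Hypothetical example: words read correctly per minute across sessions.
baseline = [20, 22, 19, 21, 20]    # pre-intervention (baseline phase)
treatment = [30, 32, 31, 29, 33]   # during intervention (treatment phase)

d = phase_effect_size(baseline, treatment)
print(f"Effect size: {d:.2f}")     # values above .8 are considered large
```

Because baseline variability in direct-observation data is often small, effect sizes computed this way can far exceed Cohen's conventional benchmarks, which is consistent with the large values reported below.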
Goal Attainment Scaling
Goal Attainment Scaling (GAS; Kiresuk, Smith, & Cardillo, 1994) is used to assess teachers' and caregivers' perceptions regarding the attainment of intervention goals. The GAS is a one-item measure that uses a 5-point scale ranging from -2 (situation is significantly worse) to +2 (goal fully met).
Social Validity Measure
Interns use an abbreviated version of the Behavior Intervention Rating Scale (BIRS; Von Brock & Elliott, 1987) to assess the social validity, or clinical meaningfulness, of the intervention. Specifically, the BIRS is used to evaluate teacher and caregiver perceptions of the acceptability and effectiveness of interventions. The abbreviated BIRS contains 10 items rated on a six-point Likert scale (1 = not at all acceptable; 6 = highly acceptable). Items were selected from the Acceptability and Effectiveness subscales, and overall mean ratings from caregivers and teachers are reported.
The overall mean effect size for interns' skills related to the provision of academic interventions was 6.6, suggesting large effects associated with treatment plans designed to enhance the academic performance of at-risk students (Cohen, 1992). The overall mean rating on the GAS was 4.4 (out of a possible 5), suggesting that participants perceived the interventions as effective in attaining treatment goals. Finally, overall mean ratings on the acceptability and effectiveness of the interventions were 5.6 and 4.6 (out of a possible 6), respectively, indicating high levels of perceived acceptability and effectiveness.
The overall mean effect size for interns' skills on the behavioral intervention case study was 3.3, suggesting the interventions were effective in improving the students' behavioral functioning (Cohen, 1992). The overall mean rating on the GAS was 4.3, indicating that participants perceived the interventions as effective in attaining treatment goals. Finally, overall mean ratings on the acceptability and effectiveness of the interventions were 5.7 and 4.6, respectively, suggesting high levels of satisfaction with the interventions.
Interns' skills related to the delivery of counseling services for at-risk students received an overall mean effect size of 2.4, indicating the interventions were highly effective in addressing the psychological issues reported by the students (Cohen, 1992). The mean rating on the GAS was 4.3, suggesting participants perceived that treatment goals were being met. Mean scores on the acceptability and effectiveness factors of the abbreviated BIRS were 5.7 and 4.6, indicating participants felt interventions implemented within the context of the counseling sessions were an acceptable and effective way to address student needs.
The paradigm shift within the field has strongly influenced graduate training and classroom practices within the School Psychology program at Rhode Island College. As faculty, we aim to prepare our graduates not only to provide evidence-based practice in school settings, but also to document the effectiveness and clinical meaningfulness of those practices. To highlight this philosophical change in training and practice, the School Psychology program hosts an evidence-based practice symposium at which interns present outcomes from one of their case studies via a poster session. This annual event serves as an opportunity to showcase the collaborative efforts among the intern, school districts, and college. Further, interns are encouraged to extend, and have successfully extended, the dissemination of their case studies to national conferences. Our continued hope is to incorporate training experiences into the curriculum that promote school psychologists as reflective scientist-practitioners who select empirically validated interventions to guide their practice and, in turn, disseminate the findings of their field-based work to further inform school-based services. This shift has allowed us to move beyond teaching and modeling evidence-based, data-driven practice to becoming partners in bridging the research-to-practice gap.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.
Graden, J.L., Casey, A., & Bonstrom, O. (1985). Implementing a prereferral intervention system: II. The data. Exceptional Children, 51, 487-496.
Iverson, A.M. (2002). Best practices in problem-solving team structure and process. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology (4th ed., pp. 657-670). Washington, DC: National Association of School Psychologists.
Kiresuk, T.J., Smith, A., & Cardillo, J.E. (1994). Goal Attainment Scaling: Applications, Theory, and Measurement. Hillsdale, NJ: Erlbaum.
Reschly, D.J. (2008). School psychology paradigm shift and beyond. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (pp. 3-15). Bethesda, MD: National Association of School Psychologists.
Von Brock, M.B., & Elliott, S.N. (1987). Influence of treatment effectiveness information on the acceptability of classroom interventions. Journal of School Psychology, 25, 131-144.