Evaluating the Impact of National and State Policies on School-Based Counseling Practices and Student Outcomes



There is perhaps no time in recent history that has presented a more compelling case for the essential role of school-based counselors than today. Advocates of school-based counseling have stressed the importance to such programs of organizational structure, a core curriculum, and program evaluation for improvement and accountability. This chapter argues for the essential role of statewide evaluation in supporting policies that promote the development and implementation of effective school-based counseling programs; the rationale is that the state often has the most leverage to bring about broad change within its borders. To this end, the chapter provides a content analysis of the statewide evaluation studies found in the literature and assesses them against the National Leadership Cadre's (2007) suggestions for statewide evaluations. Seven recommendations are offered: (1) expanding cost-effective, statewide evaluations; (2) using evaluation strategies that support outcome evaluation; (3) broadening the range of methodological approaches and evaluation strategies employed; (4) deliberately evaluating promising practices; (5) promoting evaluation use; (6) taking advantage of existing policy research studies to promote school-based counseling as a viable education reform tool; and (7) reaching out to the broader education policy and policy research community to argue that school-based counselors are essential personnel in the drive toward attainment of critical student outcomes.


References

  1. American Evaluation Association. (2004). American Evaluation Association guiding principles for evaluators. Accessed 15 Jan 2015.
  2. American School Counselor Association. (2012). The ASCA National Model: A framework for school counseling programs (3rd ed.). Alexandria, VA: Author.
  3. Barley, Z. A., & Jenness, M. (1993). Cluster evaluation: A method to strengthen evaluation in smaller programs with similar purposes. Evaluation Practice, 14, 141–147.
  4. Borders, L. D., & Drury, S. M. (1992). Comprehensive school counseling programs: A review for policymakers and practitioners. Journal of Counseling & Development, 70(4), 487–498.
  5. Byrne, D. (2013). Editorial. Evaluation: The International Journal of Theory, Research and Practice, 19(3), 213–216.
  6. California Department of Education. (2003). Assembly Bill 722: Study of pupil personnel ratios, services, and programs. Sacramento, CA: Counseling and Student Support Office.
  7. Campbell, M. (2016). Policy and policy research on school-based counseling in Australia. In J. C. Carey, B. Harris, S. M. Lee, & J. Mushaandja (Eds.), International handbook for policy research on school-based counseling. Zug, Switzerland: Springer.
  8. Carey, J., Harrity, J., & Dimmitt, C. (2005). The development of a self-assessment instrument to measure a school district's readiness to implement the ASCA National Model. Professional School Counseling, 8(5), 305–312.
  9. Carey, J., & Dimmitt, C. (2012). School counseling and student outcomes: Summary of six statewide studies. Professional School Counseling, 16(2), 146–153.
  10. Carey, J., Harrington, K., Martin, I., & Hoffman, D. (2012). A statewide evaluation of the outcomes of the implementation of ASCA National Model school counseling programs in rural and suburban Nebraska high schools. Professional School Counseling, 16(2), 100–107.
  11. Carey, J., Harrington, K., Martin, I., & Stevenson, D. (2012). A statewide evaluation of the outcomes of the implementation of ASCA National Model school counseling programs in Utah high schools. Professional School Counseling, 16(2), 89–99.
  12. Carey, J., & Martin, I. (2015). A review of the major school counseling policy studies in the United States: 2000 through 2014 (Research Monograph No. 4). Amherst, MA: The Ronald H. Frederickson Center for School Counseling Outcome Research & Evaluation.
  13. Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.
  14. Dahir, C. A. (2004). Supporting a nation of learners: The role of school counseling in educational reform. Journal of Counseling & Development, 82, 344–353.
  15. Dimmitt, C. (2009). Why evaluation matters: Determining effective school counseling practices. Professional School Counseling, 12(6), 395–399.
  16. Dimmitt, C., & Wilkerson, B. (2012). Comprehensive school counseling in Rhode Island: Access to services and student outcomes. Professional School Counseling, 16(2), 125–135.
  17. Elsner, D., & Carey, J. C. (2005). School counseling program implementation survey. Unpublished assessment instrument.
  18. Erickson, F., & Gutierrez, K. (2002). Culture, rigor, and science in educational research. Educational Researcher, 31(8), 21–24.
  19. Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational Researcher, 31(8), 4–14.
  20. Feuer, M. J. (2009). Commentary: Disciplined education policy research. In G. Sykes, B. Schneider, & D. N. Plank (Eds.), Handbook of education policy research (pp. 101–105). New York: Routledge; Washington, DC: American Educational Research Association.
  21. Gysbers, N. C., & Moore, E. J. (1981). Improving guidance programs. Englewood Cliffs, NJ: Prentice Hall.
  22. Gysbers, N. C., & Henderson, P. (1994). Developing and managing your school guidance program (2nd ed.). Alexandria, VA: American Counseling Association.
  23. Gysbers, N. C. (2006). Improving school guidance and counseling practices through effective and sustained state leadership: A response to Miller. Professional School Counseling, 9(3), 245–247.
  24. Herrell, J. M., & Straw, R. B. (Eds.). (2002). Conducting multisite evaluations in real world settings (New Directions for Evaluation, No. 94). San Francisco: Jossey-Bass.
  25. Hurwitz, M., & Howell, J. (2014). Estimating causal impacts of school counselors with regression discontinuity designs. Journal of Counseling & Development, 92(3), 316–327.
  26. Institute of Education Sciences. (2013). Common guidelines for education research and development: A report from the Institute of Education Sciences and the National Science Foundation. Washington, DC: U.S. Department of Education.
  27. Lapan, R. T., Gysbers, N. C., & Sun, Y. (1997). The impact of more fully implemented guidance programs on the school experiences of high school students: A statewide evaluation study. Journal of Counseling & Development, 75(4), 292–302.
  28. Lapan, R. T., Gysbers, N. C., & Petroski, G. F. (2001). Helping seventh graders be safe and successful: A statewide study of the impact of comprehensive guidance and counseling programs. Journal of Counseling & Development, 79(3), 320–330.
  29. Lapan, R. T., Gysbers, N. C., & Kayson, M. (2006). The relationship between implementation of the Missouri comprehensive guidance program and student academic achievement. Columbia, MO: University of Missouri.
  30. Lapan, R. T., Gysbers, N. C., Stanley, B., & Pierce, M. E. (2012). Missouri professional school counselors: Ratios matter, especially in high-poverty schools. Professional School Counseling, 16(2), 108–116.
  31. Lapan, R. T., Whitcomb, S. A., & Aleman, N. M. (2012). Connecticut professional school counselors: College and career counseling services and smaller ratios benefit students. Professional School Counseling, 16(2), 117–124.
  32. Martin, I., Carey, J., & DeCoster, K. (2009). A national study of the current status of state school counseling models. Professional School Counseling, 12(5), 378–386.
  33. Martin, I., & Carey, J. (2012). Evaluation capacity within state-level school counseling programs: A cross-case analysis. Professional School Counseling, 15(3), 132–143.
  34. Martin, I., & Carey, J. (2014). Development of a logic model to guide evaluations of the ASCA National Model for school counseling programs. The Professional Counselor, 4, 455–466.
  35. Martin, I., & Rallis, S. (2014). Building on strengths and addressing challenges: Enhancing external school counseling program evaluation. Journal of School Counseling, 12(10), 1–29.
  36. Mayne, J. (2014). Issues in enhancing evaluation use. In M. L. Loud & J. Mayne (Eds.), Enhancing evaluation use: Insights from internal evaluation units (pp. 1–14). Thousand Oaks, CA: Sage.
  37. McGannon, W., Carey, J., & Dimmitt, C. (2005). The current status of school counseling outcome research (Research Monograph No. 2). Amherst, MA: The Ronald H. Frederickson Center for School Counseling Outcome Research.
  38. Miller, G. D. (2006). How collaboration and research can affect school counseling practices: The Minnesota story. Professional School Counseling, 9(3), 238–244.
  39. National Leadership Cadre. (2007). A multi-level school counseling outcome evaluation model. Accessed 19 Feb 2015.
  40. National Research Council. (2002). Scientific research in education (R. J. Shavelson & L. Towne, Eds.; Committee on Scientific Principles for Educational Research). Washington, DC: National Academy Press.
  41. Nelson, D. E., Fox, D. G., Haslam, M., & Gardner, J. (2007). An evaluation of Utah's comprehensive guidance program: The fourth major study of Utah's thirteen-year program. Salt Lake City, UT: The Institute for Behavioral Research in Creativity.
  42. Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
  43. Patton, M. Q. (2015). Qualitative research and evaluation methods: Integrating theory and practice (4th ed.). Thousand Oaks, CA: Sage.
  44. Pellegrino, J. W., & Goldman, S. R. (2002). Be careful what you wish for – You may get it: Educational research in the spotlight. Educational Researcher, 31(8), 15–17.
  45. Redding, S., & Rhim, L. M. (2014). Evolution of school turnaround. In L. M. Rhim & S. Redding (Eds.), The state role in school turnaround: Emerging best practices (p. 10). Charlotte, NC: Information Age Publishing.
  46. Rhim, L. M., & Redding, S. (Eds.). (2014). The state role in school turnaround: Emerging best practices. Charlotte, NC: Information Age Publishing.
  47. Rog, D. J. (2010). Designing, managing, and analyzing multisite evaluations. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 208–236). San Francisco: Jossey-Bass.
  48. Salina, C., Girtz, S., Eppinga, J., Martinez, D., Kilian, D. B., Lozano, E., et al. (2013). All hands on deck: A comprehensive, results-driven counseling model. Professional School Counseling, 17(1), 63–75.
  49. Shadish, W. R. (2000). The empirical program of quasi-experimentation. In L. Bickman (Ed.), Research design: Donald Campbell's legacy (pp. 13–36). Thousand Oaks, CA: Sage.
  50. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
  51. Sink, C. A., & Stroh, H. R. (2003). Raising achievement test scores of early elementary school students through comprehensive school counseling programs. Professional School Counseling, 6(5), 350–364.
  52. Sink, C. A., Akos, P., Turnbull, R. J., & Mvududu, N. (2008). An investigation of comprehensive school counseling programs and academic achievement in Washington state middle schools. Professional School Counseling, 12(1), 43–53.
  53. Smith, M. S., & O'Day, J. (1991). Systemic school reform. In S. Fuhrman & B. Malen (Eds.), The politics of curriculum and testing (pp. 233–267). London/New York: Falmer Press.
  54. Spillane, J. P., Gomez, L. M., & Mesler, L. (2009). Notes on reframing the role of organizations in policy implementation. In G. Sykes, B. Schneider, & D. N. Plank (Eds.), Handbook of education policy research (pp. 409–425). New York: Routledge; Washington, DC: American Educational Research Association.
  55. Straw, R. B., & Herrell, J. M. (2002). A framework for understanding and improving multisite evaluations. In J. M. Herrell & R. B. Straw (Eds.), Conducting multisite evaluations in real world settings (New Directions for Evaluation, No. 94). San Francisco: Jossey-Bass.
  56. Trevisan, M. S. (2000). The status of program evaluation expectations in state school counselor certification requirements. American Journal of Evaluation, 21(1), 81–94.
  57. Trevisan, M. S., & Walser, T. M. (2015). Evaluability assessment: Improving evaluation quality and use. Los Angeles, CA: Sage.
  58. Weiss, C. H. (1988). Evaluation for decisions: Is anybody there? Does anybody care? Evaluation Practice, 9(1), 5–19.
  59. Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21–34.
  60. Whiston, S. C., Tai, W. L., Rahardja, D., & Eder, K. (2011). School counseling outcome: A meta-analytic examination of interventions. Journal of Counseling & Development, 89(1), 37–55.
  61. Wholey, J. S. (1979). Evaluation: Promise and performance. Washington, DC: Urban Institute.
  62. Wilkerson, K., Pérusse, R., & Hughes, A. (2013). Comprehensive school counseling programs and student achievement outcomes: A comparative analysis of RAMP versus non-RAMP schools. Professional School Counseling, 16(3), 172–184.
  63. Worthen, B. R., & Schmitz, C. C. (1997). Conceptual challenges confronting cluster evaluation. Evaluation, 3, 300–310.
  64. Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.
  65. Yin, R. K. (1989). Case study research: Design and methods. Newbury Park, CA: Sage.
  66. Yin, R. K. (1993). Applications of case study research (Applied Social Research Methods Series, Vol. 34). Newbury Park, CA: Sage.
  67. Yin, R. K. (2013). Validity and generalization in future case study evaluations. Evaluation: The International Journal of Theory, Research and Practice, 19(3), 321–332.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  Washington State University, Pullman, USA
