Piloting a Short Form of the Academic Competence Evaluation Scales

Original Paper

Abstract

A growing body of research indicates that noncognitive factors are important predictors of students’ academic and life success (e.g., Garcia 2014). Despite this evidence base, there are few psychometrically sound measures of such factors appropriate for use in research and practice. One currently available measure is the Academic Competence Evaluation Scales (ACES; DiPerna and Elliott 2000), which assesses the skills, attitudes, and behaviors of students that contribute to school success. The length of the ACES (73 items), however, may limit its use at the primary and secondary levels within a multi-tiered service delivery system or for large-scale educational research. To address this limitation, the current study piloted a short form of the ACES (ASF) with a sample of 301 elementary students. Results provided initial evidence for the reliability and validity of scores from the ASF.

Keywords

Noncognitive factors · Assessment · Academic enablers · Academic skills · Academic Competence Evaluation Scales

Notes

Compliance with Ethical Standards

Conflict of interest

James DiPerna is the lead author of the Academic Competence Evaluation Scales.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent and assent were obtained for all participants included in the study.

References

1. Anthony, C. J., & DiPerna, J. C. (2017). Identifying sets of maximally efficient items from the Academic Competence Evaluation Scales—Teacher Form. School Psychology Quarterly, 32(4), 552.
2. Borghans, L., Duckworth, A. L., Heckman, J. J., & Ter Weel, B. (2008). The economics and psychology of personality traits. Journal of Human Resources, 43(4), 972–1059.
3. Brady, C. E., Evans, S. W., Berlin, K., Bunford, N., & Kerns, L. (2012). Evaluating school impairment with adolescents using the Classroom Performance Survey. School Psychology Review, 41, 429–446.
4. Cleary, T. J., Gubi, A., & Prescott, M. V. (2010). Motivation and self-regulation assessments: Professional practices and needs of school psychologists. Psychology in the Schools, 47, 985–1002. https://doi.org/10.1002/pits.
5. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
6. Credé, M., Harms, P., Niehorster, S., & Gaye-Valentine, A. (2012). An evaluation of the consequences of using short measures of the Big Five personality traits. Journal of Personality and Social Psychology, 102(4), 874–888. https://doi.org/10.1037/a0027403.
7. Credé, M., Tynan, M. C., & Harms, P. D. (2017). Much ado about grit: A meta-analytic synthesis of the grit literature. Journal of Personality and Social Psychology, 113(3), 492–511. https://doi.org/10.1037/pspp0000102.
8. Cronbach, L. J., & Furby, L. (1970). How we should measure “change”: Or should we? Psychological Bulletin, 74(1), 68.
9. Demaray, M. K., & Jenkins, L. N. (2011). Relations among academic enablers and academic achievement in children with and without high levels of parent-rated symptoms of inattention, impulsivity, and hyperactivity. Psychology in the Schools, 48, 573–586. https://doi.org/10.1002/pits.20578.
10. DiPerna, J. C., & Elliott, S. N. (2000). Academic Competence Evaluation Scales. San Antonio, TX: The Psychological Corporation.
11. Duckworth, A. L., & Yeager, D. S. (2015). Measurement matters: Assessing personal qualities other than cognitive ability for educational purposes. Educational Researcher, 44(4), 237–251. https://doi.org/10.3102/0013189X15584327.
12. Elliott, S. N., & Gresham, F. M. (2007). Social Skills Improvement System: Classwide Intervention Program guide. Bloomington, MN: Pearson Assessments.
13. Farrington, C. A., Roderick, M., Allensworth, E., Nagaoka, J., Keyes, T. S., Johnson, D. W., et al. (2012). Teaching adolescents to become learners. The role of noncognitive factors in shaping school performance: A critical literature review. Chicago: University of Chicago Consortium on Chicago School Research.
14. Field, A. (2009). Discovering statistics using SPSS. Sage Publications.
15. Garcia, E. (2014). The need to address noncognitive skills in the education policy agenda (Briefing Paper No. 386). Economic Policy Institute. Retrieved November 14, 2017, from http://files.eric.ed.gov/fulltext/ED558126.pdf.
16. Gresham, F. M., & Elliott, S. N. (1990). Social Skills Rating System (SSRS). Circle Pines, MN: American Guidance Service.
17. Gresham, F. M., & Elliott, S. N. (2008). Social Skills Improvement System-Rating Scales. Minneapolis, MN: Pearson Assessments.
18. Gresham, F. M., Elliott, S. N., Cook, C. R., Vance, M. J., & Kettler, R. (2010). Cross-informant agreement for ratings for social skill and problem behavior ratings: An investigation of the Social Skills Improvement System-Rating Scales. Psychological Assessment, 22, 157–166. https://doi.org/10.1037/a0018124.
19. Gresham, F. M., Elliott, S. N., Vance, M. J., & Cook, C. R. (2011). Comparability of Social Skills Rating System to the Social Skills Improvement System: Content and psychometric comparisons across elementary and secondary age levels. School Psychology Quarterly, 26, 27–44. https://doi.org/10.1037/a0022662.
20. Hambleton, R. K. (2010). Review of the Academic Competence Evaluation Scales. In R. A. Spies, J. F. Carlson, & K. F. Geisinger (Eds.), The eighteenth mental measurements yearbook (pp. 1–4). Lincoln, NE: Buros Institute of Mental Measurements.
21. Heckman, J. J., Stixrud, J., & Urzua, S. (2006). The effects of cognitive and noncognitive abilities on labor market outcomes and social behavior. Journal of Labor Economics, 24(3), 411–482.
22. Hobart, J. C., Cano, S. J., Zajicek, J. P., & Thompson, A. J. (2007). Rating scales as outcome measures for clinical trials in neurology: Problems, solutions, and recommendations. The Lancet Neurology, 6(12), 1094–1105.
23. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55. https://doi.org/10.1080/10705519909540118.
24. Kautz, T., Heckman, J. J., Diris, R., Ter Weel, B., & Borghans, L. (2014). Fostering and measuring skills: Improving cognitive and non-cognitive skills to promote lifetime success (Working Paper No. w20749). National Bureau of Economic Research.
25. Kline, R. B. (2011). Principles and practice of structural equation modeling. New York, NY: Guilford.
26. Malecki, C. K., & Elliott, S. N. (2002). Children’s social behaviors as predictors of academic achievement: A longitudinal analysis. School Psychology Quarterly, 17, 1–23. https://doi.org/10.1521/scpq.17.1.1.19902.
27. McCormick, M. P., O’Connor, E. E., Cappella, E., & McClowry, S. G. (2013). Teacher–child relationships and academic achievement: A multilevel propensity score model approach. Journal of School Psychology, 51, 611–624. https://doi.org/10.1016/j.jsp.2013.05.001.
28. McDermott, P. A., Green, L. F., Francis, J. M., & Stott, D. H. (1999). Learning Behaviors Scale. Philadelphia: Edumetric and Clinical Science.
29. Muthén, L. K., & Muthén, B. O. (1998–2017). Mplus user’s guide (7th ed.). Los Angeles, CA: Muthén & Muthén.
30. Pearson. (2012). AIMSweb technical manual. Bloomington, MN: Pearson. Retrieved from www.aimsweb.com/wp-content/uploads/aimsweb-Technical-Manual.pdf.
31. Renaissance Learning. (2012). STAR Math technical manual. Wisconsin Rapids, WI: Renaissance Learning.
32. Renaissance Learning. (2015). STAR Reading technical manual. Wisconsin Rapids, WI: Renaissance Learning.
33. Reynolds, C. R., & Kamphaus, R. W. (2004). Behavior Assessment System for Children (2nd ed.). Circle Pines, MN: American Guidance Service.
34. Rhemtulla, M., Brosseau-Liard, P. É., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17(3), 354–373. https://doi.org/10.1037/a0029315.
35. Rosen, J. A., Glennie, E. J., Dalton, B. W., Lennon, J. M., & Bozick, R. N. (2010). Noncognitive skills in the classroom: New perspectives on educational research. Research Triangle Park, NC: RTI International.
36. Sabers, D. L., & Bonner, S. (2010). Review of the Academic Competence Evaluation Scales. In R. A. Spies, J. F. Carlson, & K. F. Geisinger (Eds.), The eighteenth mental measurements yearbook (pp. 4–6). Lincoln, NE: Buros Institute of Mental Measurements.
37. Salvia, J., Ysseldyke, J. E., & Bolt, S. (2010). Assessment in special and inclusive education (11th ed.). Boston: Houghton Mifflin.
38. Schrank, F. A., McGrew, K. S., & Mather, N. (2014). Woodcock–Johnson IV Tests of Achievement. Rolling Meadows, IL: Riverside.
39. Smith, G. T., McCarthy, D. M., & Anderson, K. G. (2000). On the sins of short-form development. Psychological Assessment, 12(1), 102–111. https://doi.org/10.1037/1040-3590.12.1.102.
40. U.S. Department of Education Office for Civil Rights. (2016). Civil rights data collection: A first look. Retrieved December 12, 2017, from https://www2.ed.gov/about/offices/list/ocr/docs/2013-14-first-look.pdf.
41. Volpe, R. J., DuPaul, G. J., DiPerna, J. C., Jitendra, A. K., Lutz, G. L., Tresco, K., et al. (2006). Attention deficit hyperactivity disorder and scholastic achievement: A model of mediation via academic enablers. School Psychology Review, 35, 47–61.
42. Wechsler, D. (2002). Wechsler Individual Achievement Test (2nd ed.). San Antonio, TX: The Psychological Corporation.
43. West, M. R., Kraft, M. A., Finn, A. S., Martin, R. E., Duckworth, A. L., Gabrieli, C. F., et al. (2016). Promise and paradox: Measuring students’ non-cognitive skills and the impact of schooling. Educational Evaluation and Policy Analysis, 38, 148–170. https://doi.org/10.3102/0162373715597298.
44. Zaslow, M., Halle, T., Martin, L., Cabrera, N., Calkins, J., Pitzer, L., et al. (2006). Child outcome measures in the study of child care quality. Evaluation Review, 30, 577–610. https://doi.org/10.1177/0193841X06291529.
45. Zeehandelaar, D., & Winkler, A. M. (2013). What parents want: Education preferences and trade-offs. Washington, DC: Thomas B. Fordham Institute.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

1. School of Teaching, Learning, and Educational Sciences, Oklahoma State University, Stillwater, USA
2. The Pennsylvania State University, University Park, USA
