Assessing Algorithmic and Computational Thinking in K-12: Lessons from a Middle School Classroom

Part of the Educational Communications and Technology: Issues and Innovations book series (ECTII)

Abstract

As educators move to introduce computing in K-12 classrooms, the issue of assessing student learning of computational concepts, especially in the context of introductory programming, remains a challenge. Assessments are central if the goal is to help students develop deeper, transferable computational thinking (CT) skills that prepare them for success in future computing experiences. This chapter argues for the need for multiple measures or “systems of assessments” that are complementary, attend to cognitive and noncognitive aspects of learning CT, and contribute to a comprehensive picture of student learning. It describes the multiple forms of assessments designed and empirically studied in Foundations for Advancing Computational Thinking (FACT), a middle school introductory computing curriculum. These include directed and open-ended programming assignments in Scratch, multiple-choice formative assessments, artifact-based interviews, and summative assessments to measure student learning of algorithmic constructs. The design of unique “preparation for future learning” assessments to measure transfer of CT from block-based to text-based code snippets is also described.
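To make the “preparation for future learning” idea concrete, the sketch below pairs a Scratch-style block script with a text-based analogue of the kind such transfer items juxtapose. This is a purely hypothetical illustration, not an actual item from the curriculum; the variable name `total` and the specific numbers are invented for the example.

```python
# Hypothetical transfer item (illustration only, not a FACT assessment item).
#
# Block-based (Scratch-like) version a student has already learned:
#   set [total] to (0)
#   repeat (4):
#       change [total] by (3)
#
# Text-based analogue the student is asked to trace:
total = 0
for i in range(4):     # runs the body 4 times, like Scratch's "repeat (4)" block
    total = total + 3  # like "change [total] by (3)"
print(total)
```

A student who has grasped the underlying loop-and-variable construct, rather than the surface syntax of blocks, should predict the same final value (12) for both versions; that is the sense in which such items probe transfer.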

Keywords

  • Computational thinking
  • Middle school computer science
  • K-12 computer science education
  • Deeper learning
  • Systems of assessments
  • Algorithmic thinking

Notes

  1.

    In an ongoing NSF-funded collaborative research effort, SRI International and Carnegie Mellon University are examining ways of automating assessment using log data from the Fairy Assessment in Alice, captured by Denner and Werner. We are employing a combination of computational learning analytics/educational data mining techniques and the ECD framework to study students’ programming process and automate the assessment of programming tasks such as the Fairy Assessment. Grover, Basu, and Bienkowski (2017) and Grover et al. (2017) provide a glimpse of this work in progress using a computational psychometrics approach.

  2.

    FACT’s quizzes and summative assessment have been shared on the assessment platform, Edfinity. http://edfinity.com/join/9EQE9DT8.

  3.

    This question is at the heart of my ongoing NSF-funded research project (#1543062) at SRI International being conducted in partnership with San Francisco Unified School District (https://www.sri.com/work/projects/middle-school-computer-science).

References

  • Astrachan, O., Barnes, T., Garcia, D. D., Paul, J., Simon, B., & Snyder, L. (2011). CS principles: Piloting a new course at national scale. In Proceedings of the 42nd ACM technical symposium on computer science education (pp. 397–398). ACM.

  • Barron, B. (2004). Learning ecologies for technological fluency: Gender and experience differences. Journal of Educational Computing Research, 31(1), 1–36.

  • Barron, B., & Darling-Hammond, L. (2008). How can we teach for meaningful learning? In L. Darling-Hammond, B. Barron, P. D. Pearson, A. H. Schoenfeld, E. K. Stage, T. D. Zimmerman, G. N. Cervetti, & J. L. Tilson (Eds.), Powerful learning: What we know about teaching for understanding. San Francisco, CA: Jossey-Bass.

  • Barron, B., Martin, C., Roberts, E., Osipovich, A., & Ross, M. (2002). Assisting and assessing the development of technological fluencies: Insights from a project-based approach to teaching computer science. In Proceedings of the conference on computer support for collaborative learning: Foundations for a CSCL community (pp. 668–669). International Society of the Learning Sciences.

  • Bienkowski, M., Snow, E., Rutstein, D. W., & Grover, S. (2015). Assessment design patterns for computational thinking practices in secondary computer science: A first look (SRI technical report). Menlo Park, CA: SRI International. Retrieved from http://pact.sri.com/resources.html

  • Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Granada Learning.

  • Bornat, R. (1987). Programming from first principles. Englewood Cliffs, NJ: Prentice Hall International.

  • Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. In A. Iran-Nejad & P. D. Pearson (Eds.), Review of research in education (Vol. 24, pp. 61–101). Washington, DC: American Educational Research Association.

  • Bransford, J. D., Brown, A., & Cocking, R. (2000). How people learn: Mind, brain, experience and school (Expanded ed.). Washington, DC: National Academy.

  • Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational thinking. In Proceedings of the 2012 annual meeting of the American Educational Research Association. Vancouver, Canada.

  • Chin, D. B., Dohmen, I. M., Cheng, B. H., Oppezzo, M. A., Chase, C. C., & Schwartz, D. L. (2010). Preparing students for future learning with Teachable Agents. Educational Technology Research and Development, 58(6), 649–669.

  • College Board. (2014). AP computer science principles: Performance assessment. Retrieved from https://advancesinap.collegeboard.org/stem/computer-science-principles/course-details.

  • Conley, D. T., & Darling-Hammond, L. (2013). Creating systems of assessment for deeper learning. Stanford, CA: Stanford Center for Opportunity Policy in Education.

  • Cooper, S. (2010). The design of Alice. ACM Transactions on Computing Education (TOCE), 10(4), 15.

  • Cooper, S., Grover, S., Guzdial, M., & Simon, B. (2014). A future for computing education research. Communications of the ACM, 57(11), 34–36.

  • Dede, C. (2009). Immersive interfaces for engagement and learning. Science, 323(5910), 66–69.

  • Denny, P., Luxton-Reilly, A., & Simon, B. (2008). Evaluating a new exam question: Parsons problems. In Proceedings of the fourth international workshop on computing education research (pp. 113–124). ACM.

  • du Boulay, B. (1986). Some difficulties of learning to program. Journal of Educational Computing Research, 2(1), 57–73.

  • Ebrahimi, A. (1994). Novice programmer errors: Language constructs and plan composition. International Journal of Human-Computer Studies, 41(4), 457–480.

  • Engle, R. A., Lam, D. P., Meyer, X. S., & Nix, S. E. (2012). How does expansive framing promote transfer? Several proposed explanations and a research agenda for investigating them. Educational Psychologist, 47(3), 215–231.

  • Ericson, B., & McKlin, T. (2012). Effective and sustainable computing summer camps. In Proceedings of the 43rd ACM technical symposium on computer science education (pp. 289–294). ACM.

  • Fields, D. A., Quirke, L., Amely, J., & Maughan, J. (2016). Combining big data and thick data analyses for understanding youth learning trajectories in a summer coding camp. In Proceedings of the 47th ACM technical symposium on computing science education (pp. 150–155). ACM.

  • Fields, D. A., Searle, K. A., Kafai, Y. B., & Min, H. S. (2012). Debuggems to assess student learning in e-textiles. In Proceedings of the 43rd SIGCSE technical symposium on computer science education. New York, NY: ACM.

  • Fletcher, G. H., & Lu, J. J. (2009). Human computing skills: Rethinking the K-12 experience. Communications of the ACM, 52(2), 23–25.

  • Gentner, D., Loewenstein, J., & Thompson, L. (2003). Learning and transfer: A general role for analogical encoding. Journal of Educational Psychology, 95(2), 393–408.

  • Glass, A. L., & Sinha, N. (2013). Providing the answers does not improve performance on a college final exam. Educational Psychology, 33(1), 87–118.

  • Goode, J., Chapman, G., & Margolis, J. (2012). Beyond curriculum: The exploring computer science program. ACM Inroads, 3(2), 47–53.

  • Grover, S. (2011). Robotics and engineering for middle and high school students to develop computational thinking. Paper presented at the annual meeting of the American Educational Research Association. New Orleans, LA.

  • Grover, S., & Basu, S. (2017). Measuring student learning in introductory block-based programming: Examining misconceptions of loops, variables, and boolean logic. In Proceedings of the 48th ACM technical symposium on computer science education (SIGCSE ’17). Seattle, WA: ACM.

  • Grover, S., Basu, S., & Bienkowski, M. (2017). Designing programming tasks for measuring computational thinking. In Proceedings of the annual meeting of the American Educational Research Association. San Antonio, TX.

  • Grover, S., Bienkowski, M., Basu, S., Eagle, M., Diana, N., & Stamper, J. (2017). A framework for hypothesis-driven approaches to support data-driven learning analytics in measuring computational thinking in block-based programming. In Proceedings of the 7th international learning analytics & knowledge conference. Vancouver, Canada: ACM.

  • Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher, 42(1), 38–43.

  • Grover, S., & Pea, R. (2016). Designing for deeper learning in a blended computer science course for middle school: A design-based research approach. In Proceedings of the 12th international conference of the learning sciences. Singapore.

  • Grover, S., Pea, R., & Cooper, S. (2014b). Remedying misperceptions of computer science among middle school students. In Proceedings of the 45th ACM technical symposium on computer science education (pp. 343–348). ACM.

  • Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer science course for middle school students. Computer Science Education, 25(2), 199–237.

  • Grover, S., Pea, R., & Cooper, S. (2016a). Factors influencing computer science learning in middle school. In Proceedings of the 47th ACM technical symposium on computing science education (pp. 552–557). ACM.

  • Koh, K. H., Nickerson, H., Basawapatna, A., & Repenning, A. (2014). Early validation of computational thinking pattern analysis. In Proceedings of the 2014 conference on innovation and technology in computer science education (pp. 213–218). ACM.

  • Kurland, D. M., & Pea, R. D. (1985). Children’s mental models of recursive LOGO programs. Journal of Educational Computing Research, 1(2), 235–243.

  • Lee, M. J., Ko, A. J., & Kwan, I. (2013). In-game assessments increase novice programmers’ engagement and level completion speed. In Proceedings of the ninth annual international ACM conference on international computing education research (pp. 153–160). ACM.

  • Lewis, C. M., et al. (2013). Online curriculum. Retrieved from http://colleenmlewis.com/scratch.

  • Lopez, M., Whalley, J., Robbins, P., & Lister, R. (2008). Relationships between reading, tracing and writing skills in introductory programming. In Proceedings of the fourth international workshop on computing education research (pp. 101–112). ACM.

  • Martin, C. K., Walter, S., & Barron, B. (2009). Looking at learning through student designed computer games: A rubric approach with novice programming projects. Unpublished paper, Stanford University.

  • Meerbaum-Salant, O., Armoni, M., & Ben-Ari, M. (2010). Learning computer science concepts with Scratch. In Proceedings of the sixth international workshop on computing education research (ICER ’10) (pp. 69–76). New York, NY: ACM.

  • Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). Focus article: On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1(1), 3–62.

  • Moreno-León, J., Robles, G., & Román-González, M. (2015). Dr. Scratch: Automatic analysis of Scratch projects to assess and foster computational thinking. Revista de Educación a Distancia, 46. doi:10.6018/red/4.

  • Morrison, B. B., Margulieux, L. E., & Guzdial, M. (2015). Subgoals, context, and worked examples in learning computing problem solving. In Proceedings of the eleventh annual international conference on international computing education research (pp. 21–29). ACM.

  • Moskal, B., Lurie, D., & Cooper, S. (2004). Evaluating the effectiveness of a new instructional approach. ACM SIGCSE Bulletin, 36(1), 75–79.

  • Parsons, D., & Haden, P. (2006). Parson’s programming puzzles: A fun and effective learning tool for first programming courses. In Proceedings of the 8th Australasian conference on computing education (Vol. 52, pp. 157–163). Australian Computer Society.

  • Pea, R. D. (1987). Socializing the knowledge transfer problem. International Journal of Educational Research, 11(6), 639–663.

  • Pellegrino, J. W., & Hilton, M. L. (Eds.). (2013). Education for life and work: Developing transferable knowledge and skills in the 21st century. Washington, DC: National Academies.

  • Robins, A., Rountree, J., & Rountree, N. (2003). Learning and teaching programming: A review and discussion. Computer Science Education, 13(2), 137–172.

  • Schwartz, D. L., & Arena, D. (2013). Measuring what matters most: Choice-based assessments for the digital age. Cambridge, MA: MIT.

  • Schwartz, D. L., & Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129–184.

  • Schwartz, D. L., Bransford, J. D., & Sears, D. (2005). Efficiency and innovation in transfer. In J. Mestre (Ed.), Transfer of learning from a modern multidisciplinary perspective (pp. 1–51). Greenwich, CT: Information Age.

  • Schwartz, D. L., Chase, C. C., & Bransford, J. D. (2012). Resisting overzealous transfer: Coordinating previously successful routines with needs for new learning. Educational Psychologist, 47(3), 204–214.

  • Scott, J. (2013). The Royal Society of Edinburgh/British Computer Society computer science exemplification project. In Proceedings of ITiCSE ’13 (pp. 313–315).

  • Spohrer, J. C., & Soloway, E. (1986). Novice mistakes: Are the folk wisdoms correct? Communications of the ACM, 29(7), 624–632.

  • SRI International (2013). Exploring CS curricular mapping. Retrieved from http://pact.sri.com.

  • Weintrop, D., Beheshti, E., Horn, M. S., Orton, K., Trouille, L., Jona, K., & Wilensky, U. (2014). Interactive assessment tools for computational thinking in high school STEM classrooms. In Intelligent technologies for interactive entertainment (pp. 22–25). Springer International Publishing.

  • Werner, L., Denner, J., & Campe, S. (2015). Children programming games: A strategy for measuring computational learning. ACM Transactions on Computing Education (TOCE), 14(4), 24.

  • Werner, L., Denner, J., Campe, S., & Kawamoto, D. C. (2012). The fairy performance assessment: Measuring computational thinking in middle school. In Proceedings of the 43rd ACM technical symposium on computer science education (SIGCSE ’12) (pp. 215–220). New York, NY: ACM.

  • Werner, L., McDowell, C., & Denner, J. (2013). A first step in learning analytics: Pre-processing low-level Alice logging data of middle school students. JEDM-Journal of Educational Data Mining, 5(2), 11–37.

  • Whitehouse.gov (2016). Computer science for all. Retrieved from https://www.whitehouse.gov/the-press-office/2016/01/30/weekly-address-giving-every-student-opportunity-learn-through-computer.

  • Wing, J. (2006). Computational thinking. Communications of the ACM, 49(3), 33–36.

  • Yadav, A., Burkhart, D., Moix, D., Snow, E., Bandaru, P., & Clayborn, L. (2015). Sowing the seeds: A landscape study on assessment in secondary computer science education. New York, NY: Computer Science Teachers Association.

  • Zur Bargury, I. (2012). A new curriculum for junior-high in computer science. In Proceedings of the 17th ACM annual conference on innovation and technology in computer science education (pp. 204–208). ACM.

  • Zur Bargury, I., Pârv, B., & Lanzberg, D. (2013). A nationwide exam as a tool for improving a new curriculum. In Proceedings of ITiCSE ’13 (pp. 267–272). Canterbury, England, UK.

Acknowledgments

The research described in this chapter was part of my Ph.D. dissertation at Stanford University. This effort benefited immensely from the support and guidance from my advisors and members of my doctoral committee. I am very grateful for the intellectual contributions of Dr. Roy Pea, Dr. Daniel Schwartz, and Dr. Brigid Barron at the Stanford Graduate School of Education, and Dr. Stephen Cooper at the Department of Computer Science, Stanford University. I would also like to acknowledge the support of the school district, principal, classroom teacher, and students who participated in this research. This project was funded by a grant from the National Science Foundation (#1343227).

Author information

Correspondence to Shuchi Grover.

Copyright information

© 2017 Springer International Publishing AG

About this chapter

Grover, S. (2017). Assessing Algorithmic and Computational Thinking in K-12: Lessons from a Middle School Classroom. In: Rich, P., Hodges, C. (eds) Emerging Research, Practice, and Policy on Computational Thinking. Educational Communications and Technology: Issues and Innovations. Springer, Cham. https://doi.org/10.1007/978-3-319-52691-1_17

  • DOI: https://doi.org/10.1007/978-3-319-52691-1_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-52690-4

  • Online ISBN: 978-3-319-52691-1

  • eBook Packages: Education, Education (R0)