
Changing Views on Assessment for STEM Project-Based Learning

  • Robert M. Capraro
  • M. Sencer Corlu

Abstract

Science, Technology, Engineering, and Mathematics (STEM) Project-Based Learning (PBL) integrates assessment methods across the different aspects of the learning experience. While STEM PBL shifts the emphasis from summative to formative assessment, it also gives greater attention to the interpersonal domain. Because STEM PBL centers on real-world projects in which students apply their understanding of various concepts, authentic assessment underlies both formative and summative assessment tasks, supported by technology such as classroom response systems and by rubrics. Authentic assessment in STEM PBL helps students move from authority-imposed regulation to self-regulation of their learning. Assessment in STEM PBL is therefore inextricably interwoven with pedagogy: its integrated assessment methods develop the whole person, stimulate creativity, and foster individual accountability within group work.
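The chapter itself contains no code. As a purely illustrative sketch of the analytic-rubric idea mentioned above, the structure of a rubric can be modeled as a small data type in Python: each criterion carries leveled descriptors, the assigned levels sum to a summative score, and the matched descriptors double as formative feedback. The `Criterion` and `Rubric` classes, the criterion names, and the level descriptors below are all invented for illustration, not taken from the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One row of an analytic rubric: a named criterion with leveled descriptors."""
    name: str
    descriptors: dict[int, str]  # score level -> what performance at that level looks like

@dataclass
class Rubric:
    """A rubric is a list of criteria; an assessment assigns each criterion a level."""
    criteria: list[Criterion] = field(default_factory=list)

    def score(self, levels: dict[str, int]) -> int:
        # Summative use: total the assigned level across all criteria.
        return sum(levels[c.name] for c in self.criteria)

    def feedback(self, levels: dict[str, int]) -> list[str]:
        # Formative use: return the descriptor matched at each assigned level.
        return [f"{c.name}: {c.descriptors[levels[c.name]]}" for c in self.criteria]

# Hypothetical two-criterion rubric for a STEM PBL artifact.
rubric = Rubric([
    Criterion("Mathematical reasoning", {
        1: "Procedures applied without justification",
        2: "Some reasoning shown, with gaps",
        3: "Claims justified and connected to the design",
    }),
    Criterion("Collaboration", {
        1: "Work divided without coordination",
        2: "Roles defined but unevenly carried out",
        3: "Individual contributions visibly integrated",
    }),
])

assigned = {"Mathematical reasoning": 2, "Collaboration": 3}
print(rubric.score(assigned))            # summative total: 5
print(*rubric.feedback(assigned), sep="\n")  # formative feedback per criterion
```

In keeping with the abstract's emphasis on self-regulation, a rubric of this kind would be shared with students before the project begins, so that the same instrument supports formative feedback during the work and summative scoring at its end.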

Keywords

Professional Development, Formative Assessment, Assessment Task, Summative Assessment, Individual Accountability



Copyright information

© Sense Publishers 2013

Authors and Affiliations

  • Robert M. Capraro, Department of Teaching, Learning and Culture, Texas A&M University, USA
  • M. Sencer Corlu, Graduate School of Education, Bilkent University, Turkey
