Enhancing Digital Simulated Laboratory Assessments: a Test of Pre-Laboratory Activities with the Learning Error and Formative Feedback Model

  • Man-Wai Chu
  • Jacqueline P. Leighton

Abstract

Digitally simulated laboratory assessments (DSLAs) may be used to measure competencies such as problem solving and scientific inquiry because they provide an environment in which the process of learning can be captured. These assessments offer several advantages over traditional hands-on laboratory tasks; as such, it is important to investigate ways to maximize the potential of DSLAs for increasing student learning. This study investigated two enhancements, a pre-laboratory activity (PLA) and a learning error intervention (LEI), that were hypothesized to improve the use of DSLAs as an educational tool. The results indicate that students who were administered the PLA reported statistically significantly lower levels of test anxiety than their peers who did not receive the activity. Furthermore, students who received the LEI scored statistically significantly higher on the more difficult problems administered during and after the DSLA. These findings provide preliminary evidence that both the PLA and the LEI may improve students' performance on a DSLA. Understanding the benefits of these enhancements may help educators better use DSLAs in the classroom to improve student science achievement.
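
The abstract reports between-group comparisons (lower test anxiety for the PLA group; higher scores on difficult problems for the LEI group) without naming the statistical test used. A minimal sketch of such a comparison, assuming an independent-samples t-test; the group labels, sample sizes, and scores below are purely illustrative and are not the study's data or the authors' actual procedure:

    # Illustrative only: hypothetical data and an assumed analysis method,
    # not the authors' actual procedure or results.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical test-anxiety ratings (lower = less anxious)
    pla_group = rng.normal(loc=2.4, scale=0.6, size=40)     # received the pre-lab activity
    no_pla_group = rng.normal(loc=2.9, scale=0.6, size=40)  # did not receive it

    # Independent-samples t-test: is the PLA group's mean anxiety lower?
    t_stat, p_value = stats.ttest_ind(pla_group, no_pla_group)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")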

Keywords

Digitally simulated laboratory assessment · Pre-laboratory activity · Learning error and formative feedback (LEAFF) model

Notes

Funding

This study was funded by the Social Sciences and Humanities Research Council (grant number 435-2016-0114), awarded to Dr. Jacqueline Leighton.

Compliance with Ethical Standards

Conflict of Interest

Dr. Man-Wai Chu declares that she has no conflict of interest. Dr. Jacqueline P. Leighton declares that she has no conflict of interest.

Informed Consent

Informed consent was obtained from the parents/guardians of all individual participants included in the study. Assent was also obtained from all student participants.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Ethics board approvals were obtained from the institution where the corresponding author worked (No. Pro00040790) and from the school district in which participating students were enrolled.

Supplementary material

ESM 1 (DOCX 138 kb)

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. University of Calgary, Calgary, Canada
  2. University of Alberta, Edmonton, Canada