
Factors affecting feeling-of-knowing in a medical intelligent tutoring system: the role of immediate feedback as a metacognitive scaffold

Published in Advances in Health Sciences Education

Abstract

Previous studies in our laboratory have shown the benefits of immediate feedback on cognitive performance for pathology residents using an intelligent tutoring system (ITS) in pathology. In this study, we examined the effect of immediate feedback on metacognitive performance, and investigated whether other metacognitive scaffolds would support metacognitive gains once immediate feedback was faded. Twenty-three participants were randomized into intervention and control groups. For both groups, periods working with the ITS under varying conditions were alternated with independent computer-based assessments. On day 1, a within-subjects design was used to evaluate the effect of immediate feedback on cognitive and metacognitive performance. On day 2, a between-subjects design was used to compare the use of other metacognitive scaffolds (intervention group) against no metacognitive scaffolds (control group) on cognitive and metacognitive performance as immediate feedback was faded. Measurements included learning gains (a measure of cognitive performance) as well as several measures of metacognitive performance, including the Goodman–Kruskal gamma correlation (G), bias, and discrimination. For the intervention group, we also computed metacognitive measures during tutoring sessions. Results showed that immediate feedback in an intelligent tutoring system had a statistically significant positive effect on learning gains, G, and discrimination. Removal of immediate feedback was associated with declining metacognitive performance, and this decline was not prevented when students used a version of the tutoring system that provided other metacognitive scaffolds. Results obtained directly from the ITS suggest that other metacognitive scaffolds do have a positive effect on G and discrimination as immediate feedback is faded. We conclude that immediate feedback had a positive effect on both metacognitive and cognitive gains in a medical tutoring system. Other metacognitive scaffolds were not sufficient to replace immediate feedback in this study. However, results obtained directly from the tutoring system are not consistent with results obtained from the assessments. To facilitate transfer to real-world tasks, further research will be needed to determine the optimal methods for supporting metacognition as immediate feedback is faded.
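The metacognitive measures named in the abstract (the Goodman–Kruskal gamma correlation G, bias, and discrimination) are standard calibration statistics computed from paired confidence judgments and item correctness. As a rough illustration only — this is not the authors' analysis code, and the 0–1 confidence scale and the sample data below are assumptions — the three measures can be computed like this:

```python
from itertools import combinations

def gamma(confidence, correct):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant)
    over all item pairs; tied pairs are ignored."""
    conc = disc = 0
    for (c1, a1), (c2, a2) in combinations(zip(confidence, correct), 2):
        if c1 == c2 or a1 == a2:
            continue  # tie on confidence or on accuracy: counts toward neither
        if (c1 - c2) * (a1 - a2) > 0:
            conc += 1
        else:
            disc += 1
    return (conc - disc) / (conc + disc) if conc + disc else float("nan")

def bias(confidence, correct):
    """Mean confidence minus mean accuracy (positive values = overconfidence).
    Assumes confidence is expressed on the same 0-1 scale as accuracy."""
    n = len(correct)
    return sum(confidence) / n - sum(correct) / n

def discrimination(confidence, correct):
    """Mean confidence on correct items minus mean confidence on incorrect items."""
    right = [c for c, a in zip(confidence, correct) if a]
    wrong = [c for c, a in zip(confidence, correct) if not a]
    return sum(right) / len(right) - sum(wrong) / len(wrong)

# Hypothetical data: six assessment items with confidence ratings and correctness.
conf = [0.9, 0.8, 0.6, 0.7, 0.3, 0.4]
acc = [1, 1, 1, 0, 0, 0]
```

G ranges from −1 to 1 (higher values mean confidence tracks correctness more closely), a bias above zero indicates overconfidence, and a larger discrimination means confidence better separates correct from incorrect responses.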


Figures 1–7 are available in the full article.




Acknowledgments

Work on this project was supported by a grant from the National Library of Medicine (R01 LM007891). The work was conducted using the Protégé resource, which is supported by grant LM007885 from the United States National Library of Medicine. We thank Lucy Cafeo for editorial assistance.


Corresponding author

Correspondence to Rebecca S. Crowley.

Appendices

Appendix 1: metacognitive pseudo-dialog

During case

1. Having a strategy usually helps in solving any problem. What sentence would best describe your strategy at the beginning of a case?
   a. I try to place the case first in a general category of disease.
   b. I try to remember a similar case.
   c. I try to look at the whole slide and then focus on finding features.
   d. I like to hypothesize and then confirm or dispute my hypothesis.

2. At any given time in problem solving, it is usually helpful to have a strategy in place. What sentence would best describe your strategy while viewing a case?
   a. I try to look for a pathognomonic feature that would narrow the differential diagnosis as early as possible.
   b. I have a systematic approach that narrows the differential diagnosis in a gradual, step-by-step fashion.
   c. I identify all the features in the order I see them until I come up with a differential diagnosis.
   d. I ask for help to get an idea of the features I should be looking for in the case.
   e. I try to place the case in a general schema (visual representation in my head) of concepts and diagnoses.

3. What is the specific purpose of the strategy you are using?
   a. This strategy has worked in the past.
   b. This strategy is effective in this case.
   c. I don't know of another strategy.
   d. I don't have any strategy.

4. Did you set specific goals before viewing this case? What are they?

5. What features are most important to identify in this case? Why are they important?

6. Did you ask yourself questions about the case before you began? What were these questions?

7. What sentence would best describe your goals at this point?
   a. I have a hypothesis but I am not able to confirm or dispute it.
   b. I am trying to locate all the features on the slide before making a hypothesis.
   c. I am just looking around the slide trying to find something familiar.
   d. I am completely lost and I need help.

8. Did you slow down when you encountered important features like <random_feature>?

9. What features are possibly associated with <random_hypothesis>?

10. One way that expertise can be acquired is to develop a schema or representation of concepts such as visual features and their relationships to different diagnoses. Do you have a schema (visual representation) in your head that helps you integrate the information you learn after each case?

11. Studies show that the amount you learn is related to your motivation. How motivated are you to learn something new from the tutor?

12. It is important to identify one's intellectual strengths and build on them to compensate for one's weaknesses. What sentence would best describe your intellectual strengths and weaknesses?
   a. I am better at viewing the slide when I have studied and know the educational concepts for a case.
   b. I can learn how to diagnose the case even without knowing anything about the subject.
   c. I like to read a textbook before attempting to view cases.
   d. I cannot learn about a case until I have been taught the educational concepts for it.

13. Locking onto a salient feature at initial presentation and failing to shift from your first impression of the case is a common heuristic error in medical decision making. Which of the following heuristics does this describe?
   a. Anchoring
   b. Pseudo-diagnosticity
   c. Satisficing
   d. Representativeness
   e. I do not know

14. A common heuristic error is the tendency not to search for other possible diagnoses once a satisfactory solution has been reached, leading to premature diagnostic closure of the case. Which of the following heuristics does this describe?
   a. Anchoring
   b. Pseudo-diagnosticity
   c. Satisficing
   d. Representativeness
   e. I do not know

15. Seeking features that confirm an initial diagnosis but not seeking features that support a competing diagnosis is known as which of these options?
   a. Anchoring
   b. Pseudo-diagnosticity
   c. Satisficing
   d. Representativeness
   e. I do not know

16. A heuristic error commonly made by categorizing cases based on one prototypical feature is known as which of these options?
   a. Anchoring
   b. Pseudo-diagnosticity
   c. Satisficing
   d. Representativeness
   e. I do not know

End of case

1. Are you consciously focusing your attention on important features? What are these features?

2. It is a good strategy to stop and review a case to help understand important relationships between features and diagnoses. How frequently are you reviewing the case to help understand these relationships?

3. Now that you have reached <random_diagnosis>, what sentence would best describe what you did?
   a. I think it would have been easier to reach the diagnosis if I had asserted a hypothesis earlier.
   b. I think I should have asked for more help.
   c. I think I should have studied the slide more carefully and identified more features before I attempted a hypothesis.
   d. I worked through the case efficiently and should not have done anything differently.

4. What sentence would best describe your asking for help?
   a. I ask for help only when I need it.
   b. I frequently ask for help to be sure of my work.
   c. I never ask for help.
   d. I only ask for help if everything else fails.

5. It is a good strategy to link relationships between features and diagnoses across different cases. Is <random_feature> related to what you have already seen in previous cases? Explain.

6. It is a good strategy to stop and reevaluate your actions to ensure that they are consistent with your goal. How did you reevaluate <random_hypothesis>?

7. What sentence would best describe how you learn?
   a. I have a schema (visual representation in my head) for features and their relationships with a diagnosis.
   b. I just take each case individually.
   c. I try to remember previous cases and link them to what is in the slide.
   d. I try to remember what I read about the subject.

8. What sentence would best describe how correct you are in identifying the features and reaching a diagnosis?
   a. I am always sure when I am correct and when I am incorrect.
   b. Most of the time, I am sure when I am correct.
   c. Most of the time, I am sure when I am incorrect.
   d. I am never sure how correct or incorrect I am.

9. What sentence would best describe how much you learned in comparison to what the tutor expected you to learn in this case?
   a. I think I learned most of what the tutor wanted me to learn.
   b. I don't think I learned what the tutor wanted me to learn.
   c. I think I got some of what the tutor wanted me to learn.
   d. I don't think the tutor was teaching me anything new.

10. Did you ever lock onto a salient feature at initial presentation and fail to shift from your first impression of the case? This is a common heuristic error in medicine, known as anchoring.

11. Did you consider multiple hypotheses for the case? A common heuristic error known as satisficing is the tendency not to search for other possible diagnoses once a satisfactory solution has been reached, leading to premature diagnostic closure of the case.

12. When trying to reach your definitive diagnosis from the list of hypotheses, did you seek both confirming and disputing findings? A common heuristic error known as pseudo-diagnosticity occurs when data are sought that confirm one hypothesis but not competing ones.

13. Do you try to identify multiple features to support a hypothesis? Representativeness is a heuristic error commonly made by categorizing cases based on one prototypical feature.

Inspectable student model (knowledge explorer) questions at end of case

1. Using the Knowledge Explorer, can you tell if you were right or wrong about <wrong_diagnosis_or_hypothesis>? If you were wrong, what error did you make?

2. Looking at the Knowledge Explorer, what things have you seen but haven't learned?

3. Click on the Self Check Summary tab. For items you were wrong or unsure about (refer to the self check column), how much knowledge does the tutor think you have about them?

Appendix 2: foil dialog

Early in case

1. Why do you need high power for the diagnosis of this case?

2. Why do you need low/medium power for the diagnosis of this case?

3. Do you have a set of features you look for in this kind of case? If yes, please list them.

4. Why did you feel feature X was important in coming to a diagnosis?

5. What other features can you confuse with feature X?

6. What is the differential diagnosis you think of when you see feature X?

7. What are some important features related to hypothesis X?

8. Does the attribute Z of feature X have other values?

9. What are the hypotheses supported by feature X?

Later in case (after at least one diagnosis has been suggested):

1. What feature(s) do you think are the most crucial in coming to the diagnosis of this case?

2. What are some important features that you have learned in diagnosing diagnosis X?

3. What do you think is the stain used in this slide? When you sign out, would you like to have more stains done on the same specimen?

4. What are the features easily identified by using H&E?

5. What are the features easily identified by using PAS-D?

6. What are the features easily identified by using colloidal iron?

7. If you had to sign out this case, is there any additional information or are there any tests that you would like to have completed? If so, why?

8. Are there any additional features that you would have liked to identify in coming to the diagnosis that were not in the tutor?

9. The diagnosis X is supported by the presence of what features?


Cite this article

El Saadawi, G.M., Azevedo, R., Castine, M. et al. Factors affecting feeling-of-knowing in a medical intelligent tutoring system: the role of immediate feedback as a metacognitive scaffold. Adv in Health Sci Educ 15, 9–30 (2010). https://doi.org/10.1007/s10459-009-9162-6
