Practice makes proficient: teaching undergraduate students to understand published research

Abstract

Scientific knowledge, including the ability to comprehend and critically evaluate empirical articles, is a key skill that most undergraduate institutions value for students in the sciences. Students often find it difficult not only to summarize empirical journal articles but also to gauge the quality and rigor of the investigation behind them. In this paper, we use instructional scaffolds (reading worksheets, RWs, with tutorials) to help students comprehend, and ultimately transfer, the skills needed to critically evaluate primary sources of research. We assess students’ learning of these skills with a multiple-choice assessment of Journal Article Comprehension (JAC). Students in experimental classes, who received the instructional scaffolds, improved on the JAC post-test compared with students in control classes. This result shows that students acquired fundamental research skills such as understanding the components of research articles. We also show that the experimental classes’ improvement on the JAC post-test extended to a written summary test, suggesting that students in the experimental group developed discipline-specific science process skills that allowed them to apply JAC skills to a near-transfer task of writing a summary.

References

  1. Lippman, J. P., Kershaw, T. C., Pellegrino, J. W., & Ohlsson, S. (2008). Beyond standard lectures: Supporting the development of critical thinking in cognitive psychology courses. In D. S. Dunn, J. S. Halonen & R. A. Smith (Eds.), Teaching critical thinking in psychology: A handbook of best practices (pp. 183–198). Boston: Blackwell Publishing.

  2. Kershaw, T. C., Lippman, J. P., & Kolev, L. N. (in preparation). Learning to critique published psychological research.

  3. Anisfeld, M. (1987). A course to develop competence in critical reading of empirical research in psychology. Teaching of Psychology, 14, 224–227. https://doi.org/10.1207/s15328023top1404_8.

  4. Bachiochi, P., Everton, W., Evans, M., Fugere, M., Escoto, C., Letterman, M., et al. (2011). Using empirical article analysis to assess research methods courses. Teaching of Psychology, 38, 5–9. https://doi.org/10.1177/0098628310387787.

  5. Baram-Tsabari, A., & Yarden, A. (2005). Text genre as a factor in the formation of scientific literacy. Journal of Research in Science Teaching, 42, 403–428. https://doi.org/10.1002/tea.20063.

  6. Bednall, T. C., & Kehoe, E. J. (2011). Effects of self-regulatory instructional aids on self-directed study. Instructional Science, 39, 205–226. https://doi.org/10.1007/s11251-009-9125-6.

  7. Beilock, S. L., & Carr, T. H. (2005). When high-powered people fail: Working memory and “choking under pressure” in math. Psychological Science, 16, 101–105. https://doi.org/10.1111/j.0956-7976.2005.00789.x.

  8. Bretzing, B. H., & Kulhavy, R. W. (1979). Notetaking and depth of processing. Contemporary Educational Psychology, 4, 145–153. https://doi.org/10.1016/0361-476X(79)90069-9.

  9. Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132, 354–380. https://doi.org/10.1037/0033-2909.132.3.354.

  10. Chi, M. T. H. (1997). Quantifying qualitative analyses of verbal data: A practical guide. The Journal of the Learning Sciences, 6, 271–315. https://doi.org/10.1207/s15327809jls0603_1.

  11. Chinn, C. A., & Brewer, W. F. (2001). Models of data: A theory of how people evaluate data. Cognition and Instruction, 19, 323–393. https://doi.org/10.1207/S1532690XCI1903_3.

  12. Chinn, C. A., & Malhotra, B. A. (2002). Epistemologically authentic inquiry in schools: A theoretical framework for evaluating inquiry tasks. Science Education, 86, 175–218. https://doi.org/10.1002/sce.10001.

  13. Christopher, A. N., & Walter, M. I. (2006). An assignment to help students learn to navigate primary sources of information. Teaching of Psychology, 33, 42–45.

  14. Coil, D., Wenderoth, M. P., Cunningham, M., & Dirks, C. (2010). Teaching the process of science: Faculty perceptions and an effective methodology. CBE—Life Sciences Education, 9, 524–535. https://doi.org/10.1187/cbe.10-01-0005.

  15. Cooper, J. M., & Strayer, D. L. (2008). Effects of simulator practice and real-world experience on cell-phone-related driver distraction. Human Factors, 50, 893–902.

  16. Dasgupta, A. P., Anderson, T. R., & Pelaez, N. (2014). Development and validation of a rubric for diagnosing students’ experimental design knowledge and difficulties. CBE—Life Sciences Education, 13, 265–284. https://doi.org/10.1187/cbe.13-09-0192.

  17. Day, J. D. (1983). Teaching summarization skills: Influences of student ability level and strategy difficulty. Cognition and Instruction, 3, 193–210. https://doi.org/10.1207/s1532690xci0303_3.

  18. Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14, 4–58. https://doi.org/10.1177/1529100612453266.

  19. Dyer, J. W., Riley, J., & Yekovich, F. R. (1979). An analysis of three study skills: Notetaking, summarizing, and rereading. Journal of Educational Research, 73, 3–7. https://doi.org/10.1080/00220671.1979.10885194.

  20. Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49, 725–747. https://doi.org/10.1037/0003-066X.49.8.725.

  21. Fleiss, J. L. (1981). Statistical methods for rates and proportions. New York: Wiley.

  22. Gadgil, S., & Nokes-Malach, T. J. (2012). Collaborative facilitation through error-detection: A classroom experiment. Applied Cognitive Psychology, 26, 410–420. https://doi.org/10.1002/acp.1843.

  23. Gillen, C. M. (2006). Criticism and interpretation: Teaching the persuasive aspects of research articles. CBE—Life Sciences Education, 5, 34–38. https://doi.org/10.1187/cbe.05-08-0101.

  24. Goldman, S. R., & Bisanz, G. (2002). Toward a functional analysis of scientific genres: Implications for understanding and learning processes. In J. Otero, J. A. Leon, & A. C. Graesser (Eds.), The psychology of science text comprehension (pp. 19–50). Mahwah, NJ: Lawrence Erlbaum Associates.

  25. Goldman, S. R., & Pellegrino, J. W. (2015). Research on learning and instruction: Implications for curriculum, instruction, and assessment. Policy Insights from the Behavioral and Brain Sciences, 2, 33–41. https://doi.org/10.1177/2372732215601866.

  26. Gottfried, G. M., Johnson, K. E., & Vosmik, J. R. (2009). Assessing student learning: A collection of evaluation tools. Office of Teaching Resources in Psychology. Retrieved from http://teachpsych.org/resources/Documents/otrp/resources/gottfried09.pdf.

  27. Gwet, K. L. (2014). Handbook of inter-rater reliability (4th ed.). Gaithersburg, MD: Advanced Analytics LLC.

  28. Hmelo-Silver, C., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42, 99–107. https://doi.org/10.1080/00461520701263368.

  29. Karcher, S. J. (2000). Student reviews of scientific literature: Opportunities to improve students’ scientific literacy and writing skills. In S. J. Karcher (Ed.), Tested studies for laboratory teaching. Proceedings of the 22nd Workshop/Conference of the Association for Biology Laboratory Education (pp. 484–487).

  30. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.

  31. Lee, L., Frederick, S., & Ariely, D. (2006). Try it, you’ll like it: The influence of expectation, consumption, and revelation on preferences for beer. Psychological Science, 17, 1054–1058. https://doi.org/10.1111/j.1467-9280.2006.01829.x.

  32. Levine, E. (2001). Reading your way to scientific literacy. Journal of College Science Teaching, 31, 122–125.

  33. Locke, L. F., Silverman, S. J., & Spirduso, W. W. (1998). Reading and understanding research. London: Sage.

  34. Lorch, R. F., Jr., & Lorch, E. P. (1996). Effects of organizational signals on free recall of expository text. Journal of Educational Psychology, 88, 38–48. https://doi.org/10.1006/ceps.1996.0022.

  35. Macnamara, B. N., Hambrick, D. Z., & Oswald, F. L. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science, 25, 1608–1618. https://doi.org/10.1177/0956797614535810.

  36. Madigan, R., Johnson, S., & Linton, P. (1995). The language of psychology: APA style as epistemology. American Psychologist, 50, 428–436. https://doi.org/10.1037/0003-066X.50.6.428.

  37. Morris, B. J., Croker, S., Masnick, A. M., & Zimmerman, C. (2012). The emergence of scientific reasoning. In H. Kloos, B. J. Morris, & J. L. Amaral (Eds.), Current topics in children’s learning and cognition (pp. 61–82). Rijeka, Croatia: InTech.

  38. Newell, G. E., Beach, R., Smith, J., & VanDerHeide, J. (2011). Teaching and learning argumentative reading and writing: A review of research. Reading Research Quarterly, 46, 273–304. https://doi.org/10.1598/RRQ.46.3.4.

  39. Oldenburg, C. M. (2016). Use of primary source readings in psychology courses at liberal arts colleges. Teaching of Psychology, 32, 25–29.

  40. Reiser, B. J. (2004). Scaffolding complex learning: The mechanisms of structuring and problematizing student work. The Journal of the Learning Sciences, 13, 273–304. https://doi.org/10.1207/s15327809jls1303_2.

  41. Robertson, K. (2012). A journal club workshop that teaches undergraduates a systematic method for reading, interpreting, and presenting primary literature. Journal of College Science Teaching, 41, 25–31.

  42. Russell, J. S., Martin, L., Curtin, D., Penhale, S., & Trueblood, N. A. (2004). Non-science majors gain valuable insight studying clinical trials literature: An evidence-based medicine library assignment. Advances in Physiology Education, 28, 188–194.

  43. Sego, S. A., & Stuart, A. E. (2016). Learning to read empirical articles in general psychology. Teaching of Psychology, 43, 38–42. https://doi.org/10.1177/0098628315620875.

  44. Shapiro, A. M. (2004). How including prior knowledge as a subject variable may change outcomes of learning research. American Educational Research Journal, 41, 159–189. https://doi.org/10.3102/00028312041001159.

  45. Son, L. K., & Simon, D. A. (2012). Distributed learning: Data, metacognition, and educational implications. Educational Psychology Review, 24, 379–399. https://doi.org/10.1007/s10648-012-9206-y.

  46. Suter, W. N., & Frank, P. (1986). Using scholarly journals in undergraduate experimental methodology courses. Teaching of Psychology, 13, 219–221.

  47. Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Boston: Pearson.

  48. Taylor, K. K. (1983). Can college students summarize? Journal of Reading, 26, 524–528.

  49. van Gelder, T., Bissett, M., & Cumming, G. (2004). Cultivating expertise in informal reasoning. Canadian Journal of Experimental Psychology, 58, 142–152. https://doi.org/10.1037/h0085794.

  50. Van Lacum, E. B., Ossevoort, M. A., & Goedhart, M. J. (2014). A teaching strategy with a focus on argumentation to improve undergraduate students’ ability to read research articles. CBE—Life Sciences Education, 13, 253–264. https://doi.org/10.1187/cbe.13-06-0110.

  51. Yarden, A. (2009). Reading scientific texts: Adapting primary literature for promoting scientific literacy. Research in Science Education, 39, 307–311. https://doi.org/10.1007/s11165-009-9124-2.

  52. Yarden, A., Brill, G., & Falk, H. (2001). Primary literature as a basis for a high-school biology curriculum. Journal of Biological Education, 35, 190–195.

  53. Zieffler, A. S., & Garfield, J. B. (2009). Modeling the growth of students’ covariational reasoning during an introductory statistics course. Statistics Education Research Journal, 8, 7–31.

Acknowledgements

The authors thank Amy Shapiro and Scott Hinze for access to their cognitive psychology courses. We thank James Bradley for his assistance in creating the summary coding scheme. Judy Sims-Knight gave invaluable advice on several statistical issues, and Susan Goldman and Micki Chi provided important suggestions for best practices in establishing inter-rater reliability. We also thank several anonymous reviewers for their feedback. Preliminary results from the pilot studies were presented at the 15th Annual European Association for Research on Learning and Instruction conference (EARLI 2013, Munich, Germany) and the 4th Biennial Conference of the International Society for the Psychology of Science and Technology (ISPST, 2012, Pittsburgh, PA).

Author information

Correspondence to Trina C. Kershaw.

Appendices

Appendix 1: Method, analysis, and results details, pilot studies

Pilot study 1

The participants of the first pilot study included 78 students from an undergraduate cognitive psychology course in the experimental group, who completed RWs, and 78 students from a different undergraduate cognitive psychology course in the control group, who used only a textbook as their instructional material. Students completed one form of the JAC as a pre-test at the beginning of the semester and one form as a post-test at the end of the semester, with order of forms counterbalanced. After completing the JAC pre-test, but before completing the first RW assignment, the experimental group received a 1-h tutorial covering strategies for locating the information within journal articles needed to complete a RW assignment. This tutorial used a research article that was not part of the JAC assessment. The control group did not receive a tutorial.
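The counterbalancing of JAC forms described above can be sketched as follows. This is an illustrative sketch only; the function name, form labels, and round-robin assignment are assumptions, not details from the study.

```python
from itertools import cycle

def assign_form_orders(participant_ids, forms=("A", "B")):
    """Alternate which JAC form serves as pre-test vs. post-test so that
    each ordering appears equally often across participants (hypothetical
    helper; the study reports only that order was counterbalanced)."""
    orderings = cycle([(forms[0], forms[1]), (forms[1], forms[0])])
    return {pid: order for pid, order in zip(participant_ids, orderings)}

# Half of the participants take form A first, half take form B first.
orders = assign_form_orders(range(6))
```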

The first pilot study showed an unexpected order effect: there was a larger increase from pre-test to post-test when students received the Beilock and Carr (2005) JAC as the post-test. To account for this order effect, we included order as a covariate in a factorial ANCOVA with a repeated measures factor of time (pre-test vs. post-test) and a between-subjects factor of group (experimental vs. control). All main effects and both interactions were significant, as shown in the table below.

Source          SS     MS     F       p       ηp²
Time            .46    .46    27.42   .0001   .15
Group           .61    .61    28.33   .0001   .16
Order           .24    .24    11.28   .001    .07
Time × group    .13    .13    7.85    .01     .05
Time × order    .41    .41    24.55   .0001   .14
Within cells    2.56   .02
Between cells   3.27   .02

Note: df = 1, 153 for all effects.
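The partial eta squared values in the table follow directly from the sums of squares: ηp² = SS_effect / (SS_effect + SS_error), where the error term is the within-cells SS for repeated measures effects and the between-cells SS for the group effect. A minimal sketch (Python, not from the original paper):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: the proportion of variance attributable to an
    effect, relative to that effect plus its associated error term."""
    return ss_effect / (ss_effect + ss_error)

# Pilot study 1 values: time uses the within-cells error term,
# group uses the between-cells error term.
print(round(partial_eta_squared(0.46, 2.56), 2))  # time: 0.15
print(round(partial_eta_squared(0.61, 3.27), 2))  # group: 0.16
```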

To further explore the time × group interaction, paired-samples t-tests were conducted within the experimental and control groups. The experimental group showed significant improvement from pre-test (M = .73, SD = .14) to post-test (M = .79, SD = .14) proportion scores, t(77) = −2.53, p = .01, d = .32, while the control group’s scores did not change significantly from pre-test (M = .68, SD = .14) to post-test (M = .68, SD = .16), t(77) = .24, p = .81, d = .05.
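A paired-samples comparison of this kind, with Cohen’s d computed as the mean difference divided by the standard deviation of the differences, can be sketched as follows. The data here are made-up illustration values, not the study’s data.

```python
import numpy as np
from scipy import stats

def paired_comparison(pre, post):
    """Paired-samples t-test plus Cohen's d for repeated measures
    (mean of the differences / SD of the differences)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    t, p = stats.ttest_rel(pre, post)
    diff = post - pre
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d

# Illustrative proportion scores for four hypothetical participants.
t, p, d = paired_comparison([0.50, 0.60, 0.70, 0.80],
                            [0.65, 0.70, 0.85, 0.90])
```

As in the text above, a negative t with a positive d simply reflects that `ttest_rel` subtracts post from pre while d is computed on post minus pre.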

Pilot study 2

In the second pilot study there was no control group. Participants were 97 undergraduate psychology students in a cognitive psychology course who completed RWs. As in the experimental group of the first pilot study, all participants received a tutorial after the pre-test, but unlike in the first pilot study, the article used for the tutorial was either Lee et al. (2006) or Beilock and Carr (2005).

Performance on the JAC forms was not correlated at post-test (r = .15, p = .15), so each form was analyzed separately. Using a repeated measures ANOVA on the Beilock and Carr JAC, we found that all students improved from pre-test (M = .73, SD = .14) to post-test (M = .84, SD = .11) regardless of which article they reviewed during the tutorial, F(1, 95) = 47.73, p = .0001, ηp² = .32. There was no main effect of tutorial type and no interaction, as shown in the table below.

Source               SS     MS     F       p      ηp²
Time                 .65    .65    47.73   .0001  .32
Review type          .01    .01    .69     .41    .01
Time × review type   .03    .03    2.07    .15    .02
Within cells         1.29   .01
Between cells        1.77   .02

Note: df = 1, 95 for all effects.

For the Lee et al. JAC, we found no main effect of time and no main effect of review content (see following table). There was, however, a significant interaction between time and review content, F(1, 95) = 4.06, p = .05, ηp² = .04. Specifically, if students received the Lee et al. tutorial, their proportion scores did not significantly increase from pre-test (M = .72, SD = .13) to post-test (M = .75, SD = .13), t(51) = −1.26, p = .21, d = .16. If students received the Beilock and Carr tutorial, their proportion scores decreased from pre-test (M = .71, SD = .13) to post-test (M = .68, SD = .12), although not significantly, t(44) = 1.76, p = .09, d = .23.

Source               SS     MS     F       p      ηp²
Time                 .00    .00    .002    .97    .00
Review type          .07    .07    3.73    .06    .04
Time × review type   .05    .05    4.06    .05    .05
Within cells         1.26   .01
Between cells        1.79   .02

Note: df = 1, 95 for all effects.

Appendix 2: Beilock and Carr (2005) JAC questions and scoring rubric

  1. Which of these is the best statement of the purpose of this research?

     a. The authors’ purpose was to examine why some people choke under pressure but others don’t. [Partial credit: not as specific as answer c]

     b. The authors’ purpose was to explain how math anxiety negatively affects performance on math tests. [No credit: although the authors discuss math anxiety in their introduction, this is not the purpose of the study]

     c. The authors’ purpose was to explain how individual differences in WM capacity affect susceptibility to choking under pressure in mathematical problem solving. [Full credit]

     d. The authors’ purpose was to examine how individuals with high WM capacity excel in testing situations. [No credit: this is the purpose of many WM articles, but not this one]

  2. Which of the following best describes the participants and what they did?

     a. 93 undergraduate students completed the operation span and reading span tests and then were split into low and high working memory groups. [Partial credit: true, but not as good an answer as b]

     b. 93 undergraduate students were screened for their WM capacity and then performed low- and high-demand modular arithmetic problems under low- and high-pressure conditions. [Full credit]

     c. 48 undergraduate students completed 93 modular arithmetic problems. [No credit: this is wrong]

     d. 93 undergraduate students completed a series of math problems that were adopted from the SAT. [No credit: this isn’t true at all]

  3. Which of the following is/are the independent variable(s)?

     a. Level of math anxiety [No credit: not measured at all]

     b. Accuracy on modular arithmetic problems and RT on correct problems [No credit: these are the DVs]

     c. Whether or not the modular arithmetic problem had a large number or required a borrow operation [Partial credit: this is the definition of problem demand]

     d. Problem demand (low vs. high), pressure (low vs. high), and working memory capacity (low vs. high) [Full credit]

  4. Which of the following is/are the dependent variable(s)?

     a. Accuracy on modular arithmetic problems and RT on correct problems [Full credit]

     b. Score on the operation span and reading span tests [No credit: these scores are used to create an IV]

     c. Performance on the modular arithmetic problems [Partial credit: students need to be more specific]

     d. Problem demand (low vs. high), pressure (low vs. high), and working memory capacity (low vs. high) [No credit: these are the IVs]

  5. Which of the following best summarizes the important results?

     a. Low WM subjects were slower than high WM subjects to solve the modular arithmetic problems. [Partial credit: while the LWMs were slower on high-demand problems, this is not the best summary]

     b. High WM subjects showed higher accuracy and were faster in the high-pressure condition. [No credit: a prediction that was not supported]

     c. High WM subjects showed lower accuracy on high-demand problems in the high-pressure condition (low WMs were not affected by pressure), and all subjects were slower on high-demand problems and under high pressure. [Full credit]

     d. High WM subjects showed higher accuracy on high-demand problems in the high-pressure condition (low WMs were not affected by pressure), and all subjects were faster on high-demand problems and under high pressure. [No credit: this is wrong; written to be the same length as the correct answer]

  6. Which of the following is/are valid criticism(s) of the research?

     a. There is demographic information missing about the participants. [Partial credit: an okay but not great criticism; the authors do omit a lot of demographic information]

     b. The math test and high-pressure condition are unlike what one would experience in the real world. [Full credit: both of these are valid criticisms]

     c. Subjects’ baseline math ability was not tested. [Partial credit: this is a nuisance variable; also, no subjects would have had experience with the modular arithmetic task]

     d. Only undergraduates were tested. [No credit: although WM might change with age, this isn’t really a valid criticism of the current study]

  7. Which of the following statements is most likely to be true, based on this research?

     a. People only do well on math problems when they are extremely anxious. [No credit: wrong]

     b. People choke under pressure because they are worried about what others think about them. [Partial credit: could be true for the HWM group, but not as good as c]

     c. People with high WM capacity are more likely than people with low WM capacity to choke under pressure in high-demand situations. [Full credit]

     d. People with high WM capacity outperform people with low WM capacity on math problems. [Partial credit: true only under low pressure]

Appendix 3: Sample coded summaries

Subject 1048, experimental group, pre-test

The most important points of study in this article are those claiming that the presence of anxiety interferes with a person’s ability to think as diligently as it normally would under normal circumstances. The article is saying that although a person may have the ability to solve math problems and perform well, if the working memory is interrupted by the negative emotion of anxiety, its ability to work at its best is decreased. When anxiety poses a threat to the working memory, it cannot put all its efforts into thinking and solving a problem, it needs to also put effort into figuring out a way to deal with the anxious feeling that is clouding the person’s mind. The article says that for people high in working memory, the presence of anxiety poses a threat because so much of their thought processes is used for solving the specific problem and not used for coping with anxious nerves in order to focus on the specific problem.

Subject 1048, experimental group, post-test

The goal of this study was to see if individual differences in WMC may have something to do with a person choking under pressure. 93 undergraduate students were divided into two groups, LWM and HWM, each person was tested on MA problems and used a computer to do so. Each person was given a partner and was told that they could win an award of $5, but that their partner had already completed the task and that it was a team effort. Their scores were dependent on time and accuracy. Results revealed that HWM was not affected by pressure on low-demand problems, however HWM’s performance on high-demand started to decline under pressure. In addition, all groups were slower in the low-pressure test than in the high-pressure test. The implications from this study are that HWM does not have an advantage over LWM during high-pressure demand situations.

Subject 5002, control group, pre-test

The research study performed by Beilock and Carr was about whether or not pressure and anxiety during a situation like answering arithmetic problems would affect High Working Memory (HWM) individuals and Low Working Memory (LWM) individuals. It was hypothesized that participants with low working memory are more susceptible to crack under pressure than participants with high working memory due to limited capacity to obtain information and figure out problem solutions. It was also hypothesized that participants with HWM are more susceptible to failure under pressure while answering arithmetic problems than LWMs because the pressure of the situation may deny them the resources/working memory that they are used to relying on while not in an anxiety provoked situation. The important parts to this study include the findings and results on how participants with HWM did in fact perform worse under pressure during the tests than they would without having pressure. On the other hand, participants with LWM performed equally worse on both the tests with or without added pressure to the situation. So for the LWM group it did not matter if there was added pressure in the situation. Under normal conditions HWM performs better than the LWM group as well because they have high levels of attentional capacities. However, when the attentional capacity was affected, from the pressure induced situation, the HWS advantage disappears. The most important finding is basically that those who have the highest capacity for success or HWM are the individuals who are more susceptible to failing while under pressure.

Subject 5002, control group, post-test

In this study, researchers are interested in whether individuals who rely more on their working memory are influenced by performance pressure while solving mathematics problems than individuals low in working memory. The findings suggest that there was no significant difference in individuals with low capacity working memory and being influenced by performance pressure. The findings further suggest that there was a significant relationship between individuals with high capacity working memory and performance pressure.

Scores received for assessment areas, sample coded summaries

Assessment area     Subject 1048 (experimental)    Subject 5002 (control)
                    Pre        Post                Pre        Post
Overall accuracy    1.5        4.5                 3          2
Hypothesis/goals    .5         .5                  1          1
Sample              0          1                   0          0
Procedure           0          .5                  0          0
IVs                 0          .75                 .5         .5
DVs                 0          .75                 0          0
Results             0          1                   .5         .5
Interpretation      1          0                   1          0

Appendix 4

Mean accuracy on each question of the JAC assessment at pre- and post-test by group

Question                       Experimental group       Control group
                               Pre         Post         Pre         Post
Purpose                        .89 (.27)   .95 (.19)    .79 (.35)   .82 (.34)
Participants and procedure     .84 (.25)   .87 (.22)    .84 (.24)   .84 (.31)
Independent variables (IVs)    .72 (.31)   .84 (.22)    .67 (.38)   .60 (.43)
Dependent variables (DVs)      .52 (.30)   .73 (.28)    .51 (.32)   .45 (.35)
Results                        .89 (.31)   .89 (.31)    .79 (.42)   .79 (.40)
Conclusions                    .95 (.17)   .95 (.15)    .89 (.25)   .84 (.27)
Criticisms                     .46 (.21)   .45 (.21)    .46 (.21)   .46 (.23)
Total JAC score                .75 (.11)   .81 (.10)    .71 (.13)   .68 (.15)

Note: Values are proportion scores in the form M (SD). Experimental group, n = 86; control group, n = 28.

Appendix 5

Mean accuracy on each item of the summary task at pre- and post-test by group

Item                           Experimental group       Control group
                               Pre         Post         Pre         Post
Hypothesis/goals               .80 (.31)   .70 (.38)    .75 (.29)   .79 (.35)
Sample                         .37 (.46)   .68 (.41)    .45 (.46)   .46 (.47)
Procedure                      .52 (.29)   .61 (.29)    .41 (.27)   .38 (.32)
Independent variables (IVs)    .59 (.22)   .65 (.22)    .52 (.24)   .54 (.26)
Dependent variables (DVs)      .11 (.27)   .30 (.39)    .05 (.20)   .08 (.24)
Results                        .35 (.31)   .45 (.32)    .29 (.29)   .30 (.25)
Interpretation                 .17 (.29)   .24 (.37)    .21 (.37)   .18 (.34)
Total summary task score       .42 (.16)   .52 (.18)    .38 (.18)   .39 (.16)

Note: Values are proportion scores in the form M (SD). Experimental group, n = 86; control group, n = 28.


About this article

Cite this article

Kershaw, T.C., Lippman, J.P. & Fugate, J.M.B. Practice makes proficient: teaching undergraduate students to understand published research. Instr Sci 46, 921–946 (2018). https://doi.org/10.1007/s11251-018-9456-2

Keywords

  • Reading empirical articles
  • Instructional scaffolds
  • Assessment
  • Learning outcomes
  • Cognitive psychology