Semi-automated transcription and scoring of autobiographical memory narratives

Abstract

Autobiographical memory studies conducted with narrative methods are onerous, requiring significant resources in time and labor. We have created a semi-automated process that streamlines the transcription and scoring of autobiographical narratives. Our paper focuses on the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, Psychology and Aging, 17, 677–89, 2002), but this method can be adapted for other narrative protocols. Specifically, here we lay out a procedure that guides researchers through the four main phases of the autobiographical narrative pipeline: (1) data collection, (2) transcribing, (3) scoring, and (4) analysis. First, we provide recommendations for incorporating transcription software to augment human transcribing. We then introduce an electronic procedure for tagging narratives that combines the traditional AI scoring method with basic keyboard shortcuts in Microsoft Word. Finally, we provide a Python script that automates the counting of details in scored transcripts. This method reduces both the time it takes to conduct a narrative study and the opportunity for error in narrative quantification. Available open access on GitHub (https://github.com/cMadan/scoreAI), our pipeline makes narrative methods more accessible for future research.



Notes

  1. A similar approach is used in studies of autobiographical imagination, wherein participants are asked to narrate an event imagined in a specific context (e.g., “Imagine catching your grandchild getting into trouble twenty years from now”; e.g., Race, Keane, & Verfaellie, 2011; Addis et al., 2008), or in studies of counterfactual thinking, wherein participants are asked to think about what could have been (e.g., De Brigard et al., 2016).

  2. A wide array of other interview structures have been employed to capture a participant’s narrative for analysis, such as the Autobiographical Memory Interview (Kopelman et al., 1989) or the TEMPau task (Piolino et al., 2003), and our method can be modified for these other structures as well. Moreover, although our focus is on narrative work in the context of specific events (i.e., situated in a specific time and place), our approach can be modified for studies examining broader autobiographical content, including life stories (e.g., Grilli, Wank, & Verfaellie, 2018), narrative meaning (McAdams & McLean, 2013), or self-referential processing (Kurczek et al., 2015; Verfaellie, Wank, Reid, Race, & Keane, 2019; also see Adler et al., 2017, for a discussion of other approaches).

  3. The AI method can also be used with other event selection prescriptions, such as the use of single-word cues to elicit specific events (see Addis et al., 2008; also see Crovitz & Schiffman, 1974). We note that more specific cues (e.g., “granddaughter’s recital”) have been shown to elicit more specific and detailed memories than cue words (e.g., “lemon”), particularly in patient populations. The former may afford a greater organizational scaffold to augment memory search (Kwan, Kurczek, & Rosenbaum, 2016). Other work shows that the emotional nature of the retrieval cues can also impact the nature of recall (Sheldon & Donahue, 2017).

  4. For free and open software alternatives, our protocol can be used in conjunction with Google Docs or OpenOffice.

References

  1. Addis, D. R., Musicaro, R., Pan, L., & Schacter, D. L. (2010). Episodic simulation of past and future events in older adults: Evidence from an experimental recombination task. Psychology and Aging, 25(2), 369.


  2. Addis, D. R., Wong, A. T., & Schacter, D. L. (2008). Age-related changes in the episodic simulation of future events. Psychological Science, 19(1), 33–41.


  3. Adler, J. M., Dunlop, W. L., Fivush, R., Lilgendahl, J. P., Lodi-Smith, J., McAdams, D. P., ... & Syed, M. (2017). Research methods for studying narrative identity: A primer. Social Psychological and Personality Science, 8(5), 519–527.

  4. Banaji, M. R., & Crowder, R. G. (1989). The bankruptcy of everyday memory. American Psychologist, 44(9), 1185.


  5. Brown, A. D., Addis, D. R., Romano, T. A., Marmar, C. R., Bryant, R. A., Hirst, W., & Schacter, D. L. (2014). Episodic and semantic components of autobiographical memories and imagined future events in post-traumatic stress disorder. Memory, 22(6), 595–604.


  6. Canny, S. (2019). Python-Docx Python Module. https://www.pypi.org/project/python-docx/

  7. Conway, M. A., & Bekerian, D. A. (1987). Organization in autobiographical memory. Memory & Cognition, 15(2), 119–132.


  8. Conway, M. A., & Rubin, D. C. (1993). The structure of autobiographical memory. In A. F. Collins, M. A. Conway, P. E. Morris (Eds.), Theories of memory (pp. 261-88). Hillsdale, New Jersey: Erlbaum.


  9. Crovitz, H. F., & Schiffman, H. (1974). Frequency of episodic memories as a function of their age. Bulletin of the Psychonomic Society, 4(5), 517–518.


  10. Dalgleish, T., Williams, J. M. G., Golden, A. M. J., Perkins, N., Barrett, L. F., Barnard, P. J., … Watkins, E. (2007). Reduced specificity of autobiographical memory and depression: The role of executive control. Journal of Experimental Psychology: General, 136(1), 23.

  11. De Brigard, F., Giovanello, K. S., Stewart, G. W., Lockrow, A. W., O’Brien, M. M., & Spreng, R. N. (2016). Characterizing the subjective experience of episodic past, future, and counterfactual thinking in healthy younger and older adults. The Quarterly Journal of Experimental Psychology, 69(12), 2358–2375.


  12. Devitt, A. L., Addis, D. R., & Schacter, D. L. (2017). Episodic and semantic content of memory and imagination: A multilevel analysis. Memory & Cognition, 45(7), 1078–1094.


  13. Diamond, N. B., Abdi, H., & Levine, B. (2020). Different patterns of recollection for matched real-world and laboratory-based episodes in younger and older adults. Cognition, 202, 104309.

  14. Dragon Naturally Speaking Voice Recognition Software (Version 15) [Computer Software]. (2016). Burlington, MA: Nuance Communications.

  15. Ebbinghaus, H. (1885). Memory: A contribution to experimental psychology. New York: Dover Publications.


  16. Express Scribe Transcription Software (Version 8) [Computer Software]. (2019). Greenwood Village, CO: NCH Software.

  17. Gaesser, B., Sacchetti, D. C., Addis, D. R., & Schacter, D. L. (2011). Characterizing age-related changes in remembering the past and imagining the future. Psychology and Aging, 26(1), 80.


  18. Gilboa, A. (2004). Autobiographical and episodic memory—One and the same?: Evidence from prefrontal activation in neuroimaging studies. Neuropsychologia, 42(10), 1336–1349.


  19. Grilli, M.D., Wank, A.A., & Verfaellie, M. (2018). The life stories of adults with amnesia: Insights into the contribution of the medial temporal lobes to the organization of autobiographical memory. Neuropsychologia, 110, 84–91.


  20. Hitchcock, C., Rodrigues, E., Rees, C., Gormley, S., Dritschel, B., & Dalgleish, T. (2019). Misremembrance of things past: Depression is associated with difficulties in the recollection of both specific and categoric autobiographical memories. Clinical Psychological Science, 7(4), 693–700.


  21. Irish, M., Hornberger, M., Lah, S., Miller, L., Pengas, G., Nestor, P. J., … Piguet, O. (2011). Profiles of recent autobiographical memory retrieval in semantic dementia, behavioural-variant frontotemporal dementia, and Alzheimer’s disease. Neuropsychologia, 49(9), 2694–2702. https://doi.org/10.1016/J.Neuropsychologia.2011.05.017

  22. Ison, N. L. (2009). Having their say: Email interviews for research data collection with people who have verbal communication impairment. International Journal of Social Research Methodology, 12(2), 161–172.


  23. James, W. (1890). Attention. The Principles of Psychology, 1, 402-458.


  24. Jefferson, G. (1984). Transcript notation. In Atkinson, J. M., Heritage, J., & Oatley, K. (Eds.), Structures of social action: Studies in conversation analysis (pp. ix-xvi). Cambridge: Cambridge University Press.


  25. Kopelman, M. D., Wilson, B. A., & Baddeley, A. D. (1989). The autobiographical memory interview: A new assessment of autobiographical and personal semantic memory in amnesic patients. Journal of Clinical and Experimental Neuropsychology, 11(5), 724–744.


  26. Kurczek, J., Wechsler, E., Ahuja, S., Jensen, U., Cohen, N. J., Tranel, D., & Duff, M. (2015). Differential contributions of hippocampus and medial prefrontal cortex to self-projection and self-referential processing. Neuropsychologia, 73, 116–126. https://doi.org/10.1016/J.Neuropsychologia.2015.05.002


  27. Kwan, D., Kurczek, J., & Rosenbaum, R. S. (2016). Specific, personally meaningful cues can benefit episodic prospection in medial temporal lobe amnesia. British Journal of Clinical Psychology, 55(2), 137–153.


  28. LePort, A. K., Stark, S. M., McGaugh, J. L., & Stark, C. E. (2017). A cognitive assessment of highly superior autobiographical memory. Memory, 25(2), 276–288.

  29. Levine, B., Svoboda, E., Hay, J. F., Winocur, G., & Moscovitch, M. (2002). Aging and autobiographical memory: Dissociating episodic from semantic retrieval. Psychology and Aging, 17, 677–89. https://doi.org/10.1037//0882-7974.17.4.677

  30. Madan, C. R. (2020). scoreAI Python Code. https://github.com/cMadan/scoreAI

  31. Matheson, J. L. (2007). The voice transcription technique: Use of voice recognition software to transcribe digital interview data in qualitative research. Qualitative Report, 12(4), 547–560.


  32. McAdams, D. P., & McLean, K. C. (2013). Narrative identity. Current Directions in Psychological Science, 22(3), 233–238.

  33. McDermott, K. B., Szpunar, K. K., & Christ, S. E. (2009). Laboratory-based and autobiographical retrieval tasks differ substantially in their neural substrates. Neuropsychologia 47, 2290–2298.

  34. Miloyan, B., McFarlane, K., & Echeverría, A. V. (2019). The adapted autobiographical interview: A systematic review and proposal for conduct and reporting. Behavioural Brain Research, 370, 111881.


  35. Nadel, L., Samsonovich, A., Ryan, L., & Moscovitch, M. (2000). Multiple trace theory of human memory: Computational, neuroimaging, and neuropsychological results. Hippocampus, 10(4), 352–368. https://doi.org/10.1002/1098-1063(2000)10:4<352::AID-HIPO2>3.0.CO;2-D


  36. Neisser, U. (1978). Memory: What are the important questions? In M. M. Gruneberg, P. E. Morris, & R. N. Sykes (Eds.), Practical aspects of memory (pp. 3–24). London: Academic Press.


  37. Neisser, U. (Ed.) (1982). Memory observed: Remembering in natural contexts. San Francisco: Freeman.


  38. NVivo (Version 12) [Computer Software]. (2018). Tehachapi, CA: QSR International.

  39. Palombo, D. J., Alain, C., Söderlund, H., Khuu, W., & Levine, B. (2015). Severely Deficient Autobiographical Memory (SDAM) in healthy adults: A new mnemonic syndrome. Neuropsychologia, 72, 105–118.


  40. Piolino, P., Desgranges, B., Belliard, S., Matuszewski, V., Lalevée, C., De La Sayette, V., & Eustache, F. (2003). Autobiographical memory and autonoetic consciousness: Triple dissociation in neurodegenerative diseases. Brain, 126(10), 2203–2219.


  41. Python [Computer Software]. (2020). Retrieved from https://www.python.org

  42. Race, E., Keane, M. M., & Verfaellie, M. (2011). Medial temporal lobe damage causes deficits in episodic memory and episodic future thinking not attributable to deficits in narrative construction. Journal of Neuroscience, 31(28), 10262–10269.


  43. Reed, J. M., & Squire, L. R. (1998). Retrograde amnesia for facts and events: Findings from four new cases. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 18(10), 3943–3954. https://doi.org/10.1523/JNEUROSCI.18-10-03943.1998


  44. Renoult, L., Armson, M. J., Diamond, N., Fan, C., Jeyakumar, N., Levesque, L., … St. Jacques, P. L. (2020). Classification of general and personal semantic details in the autobiographical interview. Neuropsychologia, 107501.

  45. Rubin, D. C., Wetzler, S. E., & Nebes, R. D. (1986). Autobiographical memory across the adult lifespan. In D. C. Rubin (Ed.), Autobiographical memory (pp. 202–221). Cambridge: Cambridge University Press.


  46. Sheldon, S., Diamond, N., Armson, M. J., Palombo, D. J., Selarka, D., Romero, K., … Levine, B. (2018). Autobiographical memory: Assessment and neuroimaging in health and disease. The Stevens’ handbook of experimental psychology and cognitive neuroscience, fourth edition.

  47. Sheldon, S., & Donahue, J. (2017). More than a feeling: Emotional cues impact the access and experience of autobiographical memories. Memory & Cognition, 45(5), 731–744. https://doi.org/10.3758/S13421-017-0691-6


  48. Söderlund, H., Moscovitch, M., Kumar, N., Daskalakis, Z. J., Flint, A., Herrmann, N., & Levine, B. (2014). Autobiographical episodic memory in major depressive disorder. Journal of Abnormal Psychology, 123(1), 51–60.


  49. Strikwerda-Brown, C., Mothakunnel, A., Hodges, J. R., Piguet, O., & Irish, M. (2019). External details revisited – A new taxonomy for coding ‘non-episodic’ content during autobiographical memory retrieval. Journal of Neuropsychology, 13(3), 371–397. https://doi.org/10.1111/jnp.12160


  50. Syed, M., & Nelson, S. C. (2015). Guidelines for establishing reliability when coding narrative data. Emerging Adulthood, 3(6), 375–387.


  51. Tulving, E. (1972). Episodic and semantic memory. Organization of Memory, 1, 381–403.


  52. Verfaellie, M., Wank, A. A., Reid, A. G., Race, E., & Keane, M. M. (2019). Self-related processing and future thinking: Distinct contributions of ventromedial prefrontal cortex and the medial temporal lobes. Cortex; A Journal Devoted to the Study of The Nervous System and Behavior, 115, 159–171. https://doi.org/10.1016/J.Cortex.2019.01.028


  53. Wickner, C., Englert, C., Addis, D.R. (2015). Developing a tool for autobiographical interview scoring. Kiwicam Conference, Wellington, New Zealand. https://github.com/scientific-tool-set/scitos

  54. Williams, J. M. G., Barnhofer, T., Crane, C., Herman, D., Raes, F., Watkins, E., & Dalgleish, T. (2007). Autobiographical memory specificity and emotional disorder. Psychological Bulletin, 133(1), 122.



Acknowledgements

The authors thank Hallie Liu and Taylyn Jameson for assistance with scoring the sample memories. D.J.P. is supported by a Discovery Grant from NSERC and the John R. Evans Leaders Fund from the Canada Foundation for Innovation. The authors thank Young Ji Tuen and two reviewers for helpful comments. The authors declare no conflicts of interest, including no financial conflicts of interest with regard to any software or commercial products cited in this paper.

Author information


Corresponding author

Correspondence to Daniela J. Palombo.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

ESM 1

(DOCX 25 kb)

Appendices

Appendix 1: Transcription manual

The following is an example of our laboratory’s protocol for transcribing narratives. We use a Microsoft Word .docx file as a template for each participant (as shown below). This document is also available for download in the Supplementary Materials (template.docx).

General transcribing instructions

  1. Open the transcript template and fill out the following information:

     a. Participant ID

     b. Transcriber (that’s you)

     c. Experimenter

     d. Date of Testing

  2. Open the participant’s raw Dragon-outputted transcript

  3. Open the participant’s MP3 file with Express Scribe software

  4. Connect an Infinity pedal to the USB port (optional)

  5. Edit the raw Dragon transcript (following the guidelines listed below under “Editing transcripts”)

  6. Insert the dialogue into the appropriate section of the template

     a. It is possible that there will be some dialogue between the experimenter and participant that requires you to add additional ‘Experimenter:’ or ‘Participant:’ lines to the template

  7. Go through the transcript and:

     a. Bold everything said by the experimenter

     b. Double-space everything said by the participant

  8. Spell-check the transcript

  9. The transcript is now ready for a second transcriber to perform a quality check and sign off on the cover sheet (when complete).

Fig. 4

Screenshot of the transcription template

Editing transcripts

  • Transcripts are verbatim. This means any ums, ahs, or stutters are documented. Dragon will not do this for you

  • When the audio is unclear, type (inaudible 00:00), where 00:00 is the time stamp for the words you could not hear

  • For consistency, it is recommended that you use the following spelling for filler/shortened words:

    Hmm, Mm, Uh, Ah, Um, Uh huh, Um hmm, Y’know, Yeah, Yep, Dunno, Gonna, Wanna, Lemme, Tryna, Kinda, Gotta, Woulda, Shoulda, Coulda, ‘em, ‘cause, Goin’, Doin’
  • Hyphens are used when a word or sentence is not finished

    • Ex: We were – Well we didn’t want to go

  • Commas are used when words are repeated and around filler words

    • Ex: I, I, I was so tired

    • Ex: I, um, wondered what to do

  • Ellipses (“…”) are used when participants pause for an extended period of time. Be careful not to overuse this; it is only necessary for long pauses (e.g., more than 3 seconds)

  • Round brackets are used to mark noises that are not words

    • Ex: (sighs) or (laughs)

  • Square brackets are used to conceal identifiers. It is important that we do NOT include anything that could identify the participant in the transcript

    • Names:

    Include the names of public figures such as celebrities or scholars

    Ex: I always loved poetry

    Do not include the names of anyone who has a personal relationship with the participant

    Ex: [participant’s name] or [participant’s boyfriend]

    • Places:

    Include the names of locations that might offer important context

    Ex: I grew up in Shanghai

    Do not include the names of places that might identify the participant

    Ex: I went for a run in [name of park in Vancouver] because it’s so close to my place

  • Quotation marks are used when the participant reports what they or someone else said aloud, but not for things the speaker thought to themselves

    • Ex: She was like, “Don’t you think we should tell them?”

    • Ex: And in my head I was like, what are you talking about?

  • Numerals follow APA format

    • Spell numbers one to nine

    • Use numerals for numbers 10 and above

    • Use numerals for years and dates

    • Use numerals for time. Only include a.m./p.m. if the speaker says it
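
Although these checks are performed by hand, parts of the quality check can be automated. The following is a minimal Python sketch, not part of our protocol, showing how a second transcriber might flag non-standard filler spellings and list bracketed placeholders for verification; the filename and the variant-spelling mapping are hypothetical.

    # A minimal sketch (not part of our protocol): flag non-standard filler
    # spellings and list bracketed placeholders so a second transcriber can
    # verify them. The filename and the variant mapping are hypothetical.
    import re
    from docx import Document  # python-docx

    # Map common variant spellings to the preferred spellings listed above
    VARIANTS = {"mmhmm": "Um hmm", "ya know": "Y'know", "cos": "'cause"}

    text = "\n".join(p.text for p in Document("P01_transcript.docx").paragraphs)

    for variant, preferred in VARIANTS.items():
        if re.search(rf"\b{re.escape(variant)}\b", text, flags=re.IGNORECASE):
            print(f"Found '{variant}'; the preferred spelling is '{preferred}'")

    # Square brackets should only ever contain de-identified placeholders
    for placeholder in re.findall(r"\[([^\]]+)\]", text):
        print(f"Check bracketed placeholder: [{placeholder}]")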

Appendix 2: Keyboard configuration and Python code for detail counts

Part 1: creating the keyboard

  A. Creating AutoText

Microsoft Word allows common text segments to be saved as ‘AutoText’. Not only are the characters saved in an AutoText; all formatting, including style and highlighting, is saved as well. Create a separate AutoText for each detail type you are scoring.

  1. Open a new Word document

  2. Type out exactly what text/format you want to appear and highlight it

  3. Go to the Insert tab

  4. Select the ‘Quick Parts’ drop-down menu

  5. Select ‘Save selection to Quick Part Gallery’

  6. The name will automatically be filled in with the text you have selected

  7. Select ‘AutoText’ in the ‘Gallery’ tab

  8. Assign the AutoText to the appropriate category, in this case ‘AutoBio_Scoring’

  9. Click ‘OK’

  B. Creating Keyboard Shortcuts

To insert the AutoText efficiently into transcripts, you can use keyboard shortcuts. Assign a keyboard shortcut to each AutoText.

  1. Open Word Options by pressing Alt+F+T

  2. Select the ‘Customize Ribbon’ tab in the left-hand menu

  3. Select the ‘Customize’ button

  4. In the ‘Press new shortcut key’ box, enter the key combination you wish to use (e.g., Ctrl+F)

  5. Scroll through the ‘Categories’ list and select ‘Building Blocks’

  6. Scroll through the ‘Commands’ list and select the AutoText you created in Part A

  7. Click the ‘Assign’ button to assign the shortcut

  8. Click ‘Close’

Once this process is complete, the newly created keyboard shortcuts can be used during scoring.

Part 2: running the Python code for counting details

The Python script for counting details in the Word documents can be downloaded from https://github.com/cMadan/scoreAI (current version is build 10). The script comprises five sections.

The first section requires the user to configure the code and should be adjusted on a case-by-case basis. The options to configure include the directory that contains the input Word documents, the folder to output the stacked data to, and the number of memories in each Word document. The specific filenames of the Word documents do not matter, though the script will load the files in alphabetical order and assumes no other files are in this input directory. Each Word document is expected to contain the configured number of memories and to be formatted as specified in the template. For an example of a scored memory document, see example_scoring.docx in the Supplementary Materials.
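
As an illustration, the configuration section might look like the following sketch; the variable and folder names are placeholders rather than the identifiers used in the actual script.

    # Sketch of the user-configurable options described above; the names and
    # folders are illustrative, not the script's actual identifiers
    input_dir = "scored_transcripts/"   # folder containing the scored .docx files
    output_dir = "output/"              # folder to write the stacked data (CSV) to
    n_memories = 3                      # number of memories in each Word document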

The sections from the second onward should not be modified unless you are changing the overall functionality of the script (e.g., using a different document template or changing the memory labels). The second section of the code loads several Python modules into the environment for the script to use in processing the documents. The only non-standard Python module required is python-docx, which can be installed using the pip program (see https://python-docx.readthedocs.io/en/latest/user/install.html).
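
As a sketch, the imports for such a script might look as follows; pandas is an assumption here, used for the dataframe and CSV steps described below.

    # Illustrative imports for a script of this kind; python-docx is the only
    # non-standard dependency (install with `pip install python-docx`)
    import os
    import glob
    from datetime import date

    import pandas as pd        # assumed here for the dataframe/CSV output step
    from docx import Document  # python-docx, for reading the .docx transcripts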

The third section defines the memory scoring labels (e.g., Int_EV, Ext_SEM), looks up the list of files in the input directory, and includes additional ‘under the hood’ settings.
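
A minimal sketch of this step is shown below; only Int_EV and Ext_SEM are named above, so the label list is deliberately incomplete, and the folder name repeats the illustrative configuration sketched earlier.

    import glob, os

    # Illustrative: extend LABELS with the remaining detail labels used in
    # your scoring scheme
    LABELS = ["Int_EV", "Ext_SEM"]

    # Files are read from the input folder in alphabetical order
    files = sorted(glob.glob(os.path.join("scored_transcripts/", "*.docx")))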

The fourth and fifth sections do most of the actual work. The fourth section defines several functions that are used repeatedly, such as for extracting specific paragraphs of text from a document and counting the number of occurrences of each scoring label. The fifth section brings it all together: it cycles through each document, first extracting the participant ID and episodic richness ratings, then finds the start of each memory within the document and uses these positions to extract the related text and calculate the component memory scores. These scores are combined with the participant ID and episodic richness values into a single data record, such that each memory is its own row. This continues until all of the documents have been processed and their records merged together. Finally, the script converts these records into a dataframe and outputs them as a CSV in the designated output folder, with a filename that includes the number of documents (i.e., number of participants) and the current date. An example output file, corresponding to the example scored memory document, is provided as example_output.csv.
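
To make the flow concrete, the sketch below implements the same logic with python-docx and pandas. It is not the scoreAI script itself: the assumed document layout (a line beginning “Participant ID:” and a paragraph beginning “Memory” before each narrative), the two labels shown, and the omission of the episodic richness ratings are illustrative simplifications of the template specified in the Supplementary Materials.

    # A condensed sketch of the counting and stacking logic; NOT the scoreAI
    # script itself. The document layout assumed here is illustrative only.
    import glob
    import os
    from datetime import date

    import pandas as pd
    from docx import Document

    input_dir, output_dir, n_memories = "scored_transcripts/", "output/", 3
    LABELS = ["Int_EV", "Ext_SEM"]  # extend with the full set of detail labels

    def count_labels(paragraphs):
        """Count how often each scoring label occurs in a block of paragraphs."""
        text = "\n".join(p.text for p in paragraphs)
        return {label: text.count(label) for label in LABELS}

    files = sorted(glob.glob(os.path.join(input_dir, "*.docx")))  # alphabetical order
    records = []
    for path in files:
        paras = Document(path).paragraphs

        # Assumed layout: participant ID on a line such as "Participant ID: P01"
        pid = next((p.text.split(":", 1)[1].strip() for p in paras
                    if p.text.startswith("Participant ID:")), os.path.basename(path))

        # Assumed layout: each memory begins at a paragraph starting with "Memory"
        # (episodic richness ratings are omitted here for brevity)
        starts = [i for i, p in enumerate(paras) if p.text.startswith("Memory")][:n_memories]
        bounds = starts + [len(paras)]

        for k, (i0, i1) in enumerate(zip(bounds[:-1], bounds[1:]), start=1):
            record = {"participant": pid, "memory": k}
            record.update(count_labels(paras[i0:i1]))
            records.append(record)  # one row per memory

    # Stack all memories into a single dataframe and write a CSV whose filename
    # encodes the number of documents (participants) and the current date
    os.makedirs(output_dir, exist_ok=True)
    pd.DataFrame(records).to_csv(
        os.path.join(output_dir, f"detail_counts_N{len(files)}_{date.today().isoformat()}.csv"),
        index=False)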

Figure 5 is an example of a filled-out sheet, which is a formatted version of the output file from Python, displayed in Excel. The three rows represent the detail counts for the memories provided in the scoring example. Other variables (e.g., time period, condition) are shown for display purposes.

Fig. 5

Screenshot of a study “master sheet” for three memories for one participant. The rows represent the detail counts for the scored memories provided in the Supplementary Materials
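
Equivalently, the output can be loaded directly into an analysis environment rather than Excel; a minimal sketch using pandas and the example_output.csv provided in the Supplementary Materials:

    # Load the stacked per-memory detail counts (the example file from the
    # Supplementary Materials) for analysis
    import pandas as pd

    df = pd.read_csv("example_output.csv")  # one row per scored memory
    print(df.head())        # inspect the detail counts
    print(df.describe())    # summary statistics across memories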


About this article


Cite this article

Wardell, V., Esposito, C.L., Madan, C.R. et al. Semi-automated transcription and scoring of autobiographical memory narratives. Behav Res (2020). https://doi.org/10.3758/s13428-020-01437-w


Keywords

  • Autobiographical interview
  • Autobiographical memory
  • Narratives
  • Scoring