A few years ago I was having a beer with a clinician colleague. We meet regularly to discuss our collaborative research; meetings always occur in the late afternoon to accommodate a pint of beer. He flopped down in a chair and said:

Boy, did I have a rough day in the ED yesterday!

“How come?” said I.

H1N1 flu. I had 75 cases in an 8-hour shift!

“Too bad,” says I.

But in the middle, I picked up a Kawasaki disease.

“How did you do it?”

I dunno. It just didn’t feel right.

My ears perked up. “It just didn’t feel right.” That’s weird. I’ve pondered for years about how docs deal with unusual or difficult cases. One view is that System 1 (pattern recognition) is OK for routine cases, but you need System 2 (logical, analytical, effortful) to pull out the complex or unusual cases (Croskerry 2003). But proponents of this view never describe how the doc knows which is which. Others have talked about “slowing down when you should” (Moulton et al. 2010)—again, a description suggesting that experienced physicians somehow know when things aren’t going well or fitting together. Kahneman says something similar in his bestseller, “Thinking, Fast and Slow” (2011):

One way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2. (p. 417)

So everyone agrees that when the problem is problematic, you should invoke careful, systematic analytical reasoning—System 2. But how do you know when the problem is problematic? Well, the anecdote above seems to be about the closest I’ve gotten to pinning down that insightful moment. Strangely, it sounds a lot more like System 1—intuitive, rapid, unconscious—than System 2.

Well, eventually that anecdote spun into a qualitative study with Meredith Vanstone as principal investigator that is just now getting into publication (Peters et al. 2017). She has identified many more of these “cognitive dissonance” moments, and gone on to create a taxonomy.

But that’s not what I want to talk about today. What is of immediate interest is what happened next. A couple of weeks later we met again. I had been telling almost everyone I met about this light bulb moment. When I reminded Jon, he said something like:

Well, it was obvious. When you examined the eyes, you could clearly see the inflammation of the arterioles on the retina. (Or something like that).

I then reminded him that this was NOT the way he had talked about it two weeks earlier. And he agreed.

And this is the point where things get more interesting. Why did he change his introspective account 2 weeks later? And was he aware that he had drastically changed his description of the event from a “Eureka” kind of insight to almost a foregone conclusion given the data?

Fast forward about 3 years. Earlier this year a paper was submitted to AHSE in which the authors had first administered a standard psychological test of unconscious bias, the “Implicit Association Test,” to a group of faculty at Western University. They then held a 4-hour workshop in which they “aimed to bring implicit bias regarding individuals with mental illness into conscious awareness for learners, and to foster critical reflection while enhancing conscious efforts to overcome bias.” (Sukhera et al. 2018).

The paper received strong reviews from the qualitative researchers who examined it, and was moving toward acceptance after a rewrite. Until I saw it. At that point I began to have second thoughts. Putting on a psychologist hat, I asked myself whether people can really become aware of their unconscious attitudes. And then I began to explore a relatively huge literature in psychology dating back to the 1970s. The basic question was, “To what extent are people aware of their own unconscious thinking processes?”

Before I go on, this is not an arcane question of interest only to psychologists. If you look around, it’s easy to identify numerous situations where we assume that people can accurately recount past experiences they have had. Or if they cannot recall spontaneously, they can with appropriate guidance. That was an implicit assumption of the study we reviewed. It also is a foundational principle of psychoanalysis, which may be why it fell into disfavour among experimental psychologists a century ago. It never disappeared in psychiatry, however. Remember “repressed memory syndrome”, where disturbed patients, under appropriate prompting and exploration from trained analysts, are able to recall childhood experiences where they were abused by parents? Moving away from the clinical domain, eyewitness testimony depends on accurate, unbiased recall by witnesses. Can we really believe that eyewitnesses can provide veridical accounts of incidents hours, days, or decades ago?

In our own domain of education, we make this assumption regularly. When a clinician on rounds introspects about how he arrived at a diagnosis, we never think to question whether this is an accurate account. We encourage students to “reflect on” and “self-assess” their approaches or performances. Many quantitative surveys in domains like personality, learning style, “emotional intelligence,” or “levels of processing” ask respondents to report on activities or attitudes that they may never have consciously examined, based on the implicit assumption that they will have insight into their behaviour.

On the other hand, the extensive literature on cognitive biases, which is now invoked every time a diagnostic error arises, is based on the premise that we fall prey to these biases unknowingly and are powerless to remediate them. As Kahneman says:

What can be done about biases?… How can we improve judgments and decisions…? The short answer is that little can be achieved without a considerable investment of effort. … Thinking, Fast and Slow, p. 416 (of 417)

However, as a community, medicine pretty well disregards Kahneman’s admonition and accepts as axiomatic that if we teach people about cognitive biases, they will be able to recognize when they have made one, and errors will go away. Over a hundred biases have been identified (which means that someone somewhere has described something that may occur, with no particular evidence that it does occur). There have been many reported attempts to devise instructional strategies to make people aware of their biases so that they can correct their own errors. However, not only are these strategies consistently unsuccessful (Sherbino et al. 2011); one study (Zwaan et al. 2017) showed that ostensible experts in cognitive biases cannot agree on which is which.

As another example, much qualitative research relies on informant interviews, where the interviewee may be asked to recall an incident or activity and mentally reconstruct her actions.

Is it universally true that we are completely incapable of monitoring our thought processes? Of course not. There are some circumstances where introspection can be believed. For example, if I ask you what route you took when you went to the airport, or what you had for breakfast, I have no reason to doubt your response. On the other hand, if I ask how often you encounter delays in driving to the airport, or whether you run red lights, there is more room for uncertainty.

Please note that many of the circumstances I’ve just described are derived more from psychology and quantitative research than from sociology and qualitative research. Perhaps not surprisingly, then, psychologists have been concerned about this issue for nearly 50 years. It is worth delving a bit into this literature with a view to seeing whether we can determine when we can and cannot plausibly believe a self-report.

As near as I can tell, the debate was initiated by a 1977 paper by Nisbett and Wilson in Psychological Review (Nisbett and Wilson 1977). The abstract says it all:

Evidence is reviewed which suggests that there may be little or no direct introspective access to higher order cognitive processes. Subjects are sometimes (a) unaware of the existence of a stimulus that importantly influenced a response, (b) unaware of the existence of the response, and (c) unaware that the stimulus has affected the response. It is proposed that when people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response.

There is evidence from many domains in psychology to support this claim. As one of the most convincing examples, Bargh and Chartrand (1999) review dozens of articles in which experimenters induce various states and then show their unconscious influence on behaviour. In one popular example, participants first engaged in a word identification task. Unknown to them, one group saw extra words associated with aging (Florida, sentimental, wrinkle). In one study, they walked more slowly on leaving the lab. In another, given a memory task, they were more forgetful. But none noticed the manipulation.

This approach has even been used in medical education. Hanson et al. (2007) studied a group of adolescents who had simulated depression and suicidality for an OSCE and showed that their time to walk out of the lab was more than twice that of a control group.

We can cite other examples. Loftus and Palmer (1974) showed how eyewitness testimony can be easily manipulated. People who watch a video of two cars colliding and are asked “How fast were the cars going when they smashed?” say 40 m.p.h. on average; others who watch the same video and are asked the same question with the verb “contacted” say 31 m.p.h.

Perhaps not surprisingly, the Nisbett paper stimulated a strong defense of verbal introspection by Ericsson and Simon (1980) (of 10,000 hours fame). Not ones to use 100 words where 1000 will do, they make the case in this monumental (36-page) paper that verbalizing will affect cognitive processes only if people are required to report on aspects of thinking they would not otherwise attend to.

In fairness, they have a point. It’s not the case that we can always believe introspection or that we can never believe introspection. The challenge is to identify the circumstances when we are or are not likely to be able to accurately relate our thinking. As Kellogg (1982) puts it:

Evidence is presented … that when concept learning occurs solely by automatic frequency processing, introspective accounts are inaccurate, but when the nature of the task prompts intentional hypothesis testing, introspective reports are accurate, revealing clues that subjects engage in a conscious hypothesis testing strategy.

If you want to read more, Greenwald and Banaji (2017) present a comprehensive contemporary account.

The takeaway from this fascinating body of research is that anyone who conducts research in social and behavioural science must, at some point, critically examine the extent to which their methods are truly capturing the thinking processes of their participants, or are simply attending to a post hoc (albeit unintended) interpretation of their actions.

Oh yes. Almost forgot. The Sukhera paper. It was published in the last issue of AHSE. Peer review by experts must prevail. That’s how science works. And it stimulated a great discussion at a recent conference.