It seems that this month, every time I turned around, someone was talking about educational myths. Here are some examples:

  1. There is a special issue of Medical Education in the offing that will be devoted to educational myths, with a series of invited papers.

  2. I was honoured to deliver the keynote address for Division I of the American Educational Research Association. My talk began with 10 assertions, all of which were ultimately shown to be educational myths.

  3. I recently came across a book by De Bruyckere, Kirschner and Hulshof called Urban Myths about Learning and Education (2015), which lists a total of 35 myths.

Although the intent of this editorial is not to create a Reader’s Digest condensed article on educational mythology, let me give you a few, and also highlight some key articles that dispel some of these myths. For many of you this may be old hat, to be greeted with a stifled yawn, but perhaps there will be a few surprises.

  1. Learning styles

    Assertion:

    Individual students have different learning styles. Some learn better visually; others are verbal learners. An effective teacher must take individual learning styles into account.

    Evidence:

    Learning style is the poster child of educational myths. The evidence is completely consistent that, however you define learning style (and there are myriad ways to do so), matching instruction to learning style yields no gains in learning (Pashler et al. 2009).

  2. Critical thinking and problem-solving

    Assertion:

    “Virtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically” (Willingham 2007).

    Evidence:

    There have been a number of review articles, dating back to 1989 (Perkins and Salomon 1989), showing that the major determinant of problem-solving is the application of relevant knowledge. As Willingham (2007) says:

    [People think that] … critical thinking … is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge) (Willingham 2007).

  3. Simulation

    Assertion:

    To maximize learning, simulation should be as authentic (high fidelity) as possible to ensure transfer to the clinical setting.

    Evidence:

    In fact, the evidence is consistent that “high fidelity” simulations provide only marginally, and not significantly, better learning than well-designed “low fidelity” simulations (Norman et al. 2012). And under some circumstances, they can result in worse learning (Chen et al. 2015).

  4. E-learning

    Assertion:

    E-learning has clear and consistent advantages over alternative approaches. Today’s students learn better in a virtual environment.

    Evidence:

    Like a lot of curriculum-level interventions, e-learning is far better than nothing but no better than something. This was demonstrated robustly by Cook et al. (2011) in a systematic review of 214 studies.

  5. Multiple choice tests

    Assertion:

    Too much time is spent on knowledge tests. A score on a multiple choice test gives the student little information to help her learn and study better.

    Students “learn to the test”.

    Evidence:

    In fact, practice multiple choice tests have been repeatedly shown to enhance learning more than an equivalent amount of time spent in self-study—“test enhanced learning” (Larsen et al. 2008).

  6. The Millennium generation

    Assertion:

    Modern students are highly effective multi-taskers. “Children growing up now might have an associative genius we don’t—a sense of the way ten projects all dovetail into something totally new” (Anderson 2009, in Kirschner 2013).

    Evidence:

    Not surprisingly, their brains aren’t wired any differently. No one, of any age, can multitask unless one task is automated, like walking or driving. Like the rest of us, they task-switch from the lecture to the internet, and this comes at a cost in time and distraction (Kirschner and van Merriënboer 2013).

I have used variations on these questions in a talk I have given numerous times, by now to hundreds of people—clinical teachers and educators. The number of people who correctly identified that all of these assertions are, in my view, false fits easily on the fingers of one hand.

Roediger has commented on this phenomenon, where the education community repeatedly falls prey to seemingly sensible, but ultimately incorrect, assertions:

The field of education seems particularly susceptible to the allure of plausible but untested ideas and fads (especially ones that are lucrative for their inventors). One could write an interesting history of ideas based on either plausible theory or somewhat flimsy research that have come and gone over the years. And … once an idea takes hold, it is hard to root out. (Roediger 2013, p. 2)

Inevitably, if you did recognize most of these falsehoods, you might just be feeling a bit of smugness at this point. But let me suggest that smugness is not the right response. Quite the opposite: the fact that so many educators, in so many institutions, are making curriculum decisions based on inadequate evidence, incorrect evidence or none at all, particularly when good evidence leading to an opposite conclusion exists, is cause for serious disquiet.

Now I am not advancing a conspiracy theory. I do not think that people who advocate courses in critical thinking are so cynical as to press on with activities they know are wrong. Not at all; I am quite certain they truly believe in what they are doing. So how did they come to believe it? Is it just that they don’t care enough to be well informed? Or have we, as professional educators, let them down? As Pogo says:

We have met the enemy and he is us

Certainly, the finding that many engaged in education are relatively ignorant of some important research findings is not a new one. Educational researchers have been concerned about “knowledge translation” to educational practitioners for many decades. A number of strategies have been attempted. Journals like Medical Teacher incorporate a number of strategies to help teachers apply some of the findings from research. Conferences frequently run adjunct sessions like the Essential Skills in Medical Education courses at AMEE. The FAIMER initiative is explicitly directed at increasing the educational research skills and knowledge of health science educators in the developing world. Master’s and Fellowship programs have proliferated around the world, and new programs come on stream every year.

Perhaps the most direct attempt to bring current research knowledge to educational practice is the BEME (Best Evidence Medical Education) collaboration. On their webpage, BEME is described as:

  • The Best Evidence Medical Education (BEME) Collaboration (Harden et al. 1999) is an international group of individuals, universities and professional organisations committed to the development of evidence informed education in the medical and health professions through:

    • the dissemination of information which allows teachers and stakeholders in the medical and health professions to make decisions on the basis of the best evidence available;

    • the production of reviews which present the best available evidence and meet the needs of the user; and

    • the creation of a culture of best evidence education amongst individuals, institutions and national bodies.

  • BEME’s goal is to provide and to make available the latest findings from scientifically-grounded educational research. This will enable teachers and administrators to make informed decisions about the kinds of evidence-based education initiatives that boost learner performance on cognitive and clinical measures.

To date, the BEME Collaboration has published 47 systematic reviews of topics in medical education. Some, like the Issenberg et al. (2005) review of simulation, have been highly cited. However, many have not; a review I did in 2013 found that the average number of citations of BEME reviews was 8.5 (this may well have changed by now).

The problem may lie in part in the nature of systematic reviews, which are effective at addressing “Does it work?” questions with well-defined interventions and outcomes that are easily searched. But education is not like that; often the questions are ill-defined, like “Is inter-professional education effective?” What is inter-professional education? Is it the same at all educational levels? For all professions? How do you measure effect? Inevitably, given the heterogeneity of populations, interventions and outcomes, the answer will be a long string of “It depends”—hardly useful as guidance for practitioners. Moreover, reviews in education seem to run out of studies long before the answers are in. A BEME review of educational games located 11,567 articles but retained only 5. Turning it around, in looking at the assertions with which I began this editorial, in only one of the six was the definitive article (in my view) a systematic review.

How can we bring the researcher and practitioner communities together? I expect that efforts such as the special issue of Medical Education and the BEME reviews, which are both directed at “Knowledge translation” (ugh, what an awful term), may have limited impact, simply because the vehicle of dissemination is likely to be read primarily by the research community—the converted.

Recently van Enk and Regehr (2017) suggested that a recognition that health science education is a field, not a discipline, where both theoretical inquiries and practical questions are on equal footing, may serve to bring the practitioners and researchers together. Perhaps. But perhaps not. I am not sure that a White Paper on health science education will have a lot of impact in the diverse communities we serve.

At another level, I earlier highlighted the many international programs such as FAIMER and ESME that explicitly have a mandate to improve the education community’s awareness of critical research findings. We are also seeing a proliferation of fellowship programs, and formal graduate programs at the Master’s and PhD levels. While a few of their graduates will go on to be key players in research, the hope is that far more will become better-informed educational leaders in their own institutions.

But these programs are inevitably somewhat detached from ongoing curriculum activities in health sciences programs. Perhaps local initiatives involving teachers in each institution may be more effective. This is historically the mandate of Faculty Development offices.

But I fear that this too is insufficient. One simple example: in the 50-odd years I have been at McMaster, faculty development and educational research have always been located in close physical proximity. But even when I was in charge of research, conversations with my counterpart in faculty development were pretty well restricted to snatched exchanges over wine at faculty receptions. Perhaps I am to blame; perhaps at other institutions faculty development and research interact seamlessly. Certainly many of my colleagues in the research in medical education community do engage in formal faculty development activities. But I suspect they are the exception rather than the rule, if for no other reason than that there are far more medical schools than there are offices of research in medical education. Moreover, the goals and skills of faculty developers and researchers are often as different as those of the microbiologist in the lab and the family doc treating a kid with a strep throat.

There will be no simple solutions. Maybe things will change and we just need more time. Perhaps my angst simply reflects an impatience born of a recognition that I do not have decades of time ahead of me to right these wrongs. But it remains distressing that so many educational activities arise from theories and models that have been discounted by evidence. At best this is inefficient; at worst, potentially harmful.