In a Word The true value of a conference lies in its effects on participants. Conferences exist to generate and share knowledge that impacts behavior and links to results: this will not happen if the state of the art of conference evaluation remains immature and event planners fail to shine a light on the conditions for learning outcomes.

No Loose Change

Lest we forget, a conference is a purposeful gathering of people aiming to pool ideas on at least one topic of joint interest or needing to achieve a common goal through interaction (and, naturally, relation). Conferences are face-to-face, sometimes virtual,Footnote 1 venues for situated learning dedicated to the generation and sharing of knowledge, usually to reach agreement, in formal or informal (yet planned) settings.

Minds that have nothing to confer find little to perceive.

—William Wordsworth

Conferencing, then, is an age-old technique for reasoning and problem solving, aka sense making, the process by which people give meaning to experience through spoken and written narratives. Certainly, the Socratic Method—a debate between individuals with opposing views that used effective questions to stimulate critical thinking—was a form of it (and the oldest known way of teaching).

Meetings are a great trap. Soon you find yourself trying to get agreement and then the people who disagree come to think they have a right to be persuaded. However, they are indispensable when you don’t want to do anything.

—John Kenneth Galbraith

Nowadays, new modes of transport and communication mean that conferences can take many forms, including (i) conventions—large meetings of delegates, industries, members, professions, representatives, or societies seeking concurrence on certain attitudes or routines, such as processes, procedures, and practices; (ii) forums—broad occasions for open discussion, as a rule among experts but now and then involving audiences; (iii) seminars—prolonged and sometimes repeated meetings for exchange of results and interaction among a limited number of professionals or advanced students engaged in intensive study or original research; (iv) workshops—brief educational programs for small groups of peers focusing on techniques and skills in a particular field; (v) retreats—periods of group withdrawal from regular activities for development of closer relationships, instruction, or self-reflection; and (vi) meetingsFootnote 2—sundry instances of coming together for business, civic, courtship, educational, government, health and wellness, leisure, religious, social, sports, and other functions.

Conventions, forums, seminars, workshops, retreats, and meetings—to which the emerging practice of “unconferencing”Footnote 3 should hereafter be added—are a pervasive form of interaction. The resources allocated to their organization, conduct, and attendance—of which the opportunity cost incurred from taking part is no loose change—must surely be astronomical. Even so, we seldom assess their relative value to either participants or event planners. (Run-of-the-mill, end-of-session surveys requesting participants to jot down what they enjoyed or disliked—namely, to log reactions—will no longer do.) Granted that conferences serve different purposes, these Knowledge Solutions concentrate on gatherings that are ostensibly designed to generate and share (relevant, effective, and therefore valued) knowledgeFootnote 4 and leverage related networking in support, such as forums and seminars. [That said, given the claims that other meetings make about knowledge generation and sharing—pace the disconnect between their means and ends—it stands to reason that these Knowledge Solutions also apply there.]

The Poverty of Conference Evaluations

True genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.

—Winston Churchill

Questionnaires are synonymous with conference evaluation. (Indeed, few other tools seem to be used.) In all probability, the language that event planners employ to allegedly gauge conference satisfaction—they hardly ever dare establish outcome and impact—will read: “Thank you for taking time to complete this survey. Your opinion is important: it will inform plans for the next event.” All too predictably, when some pretense at conference evaluation is in fact made, the following “key” questions will be posed: Did the event lead to its goal?Footnote 5 What were its main strengths and weaknesses? What did you value in the event?Footnote 6 Were the sessionsFootnote 7 relevant to the subject matter? How well did they align with your expectations? Can you rate the quality of the presenters? Has your knowledge of the subject matter increased as a result of the event? Will the event set in motion changes in the way you work in the future?Footnote 8 What undertakings can you now initiate? How might the event be improved? (For sure, there will also be open fields inviting further suggestions for improvement.) Be that as it may, the politically incorrect question must be asked: what might data compiled from chiefly formative, not summative, quizzes possibly help validate or change in any meaningful way?Footnote 9 Stating the obvious, feedback that cannot be used should not be sought. Ironically, since a dog’s tail should not wag its owner, what practical recommendations for improvement are proposed will probably be turned down, with thanks, as great but simply not possible given this or that constraint.

There has, of course, been much debate over the near-universal reliance on questionnaires for conference evaluation. Detractors wonder whether they really provide worthwhile information.Footnote 10 Adherents research how to obtain a representative cross-section of attendees since, more often than not, there is no strong motivation to respond; they remark that surveys can (at low cost) ensure at least summarily uniform coverage of all information areas deemed essential; provide an opportunity to triangulate results using different techniques; and allow the same questions to be submitted in the same way year after year so that evaluation results can be compared against a baseline. Innovators advocate “recent life histories” that highlight the event’s influence on selected individuals, for example in terms of education, networking, professional development, and application of knowledge gained; or “roving reporters” who would converse with participants throughout the event with a mix of demographic, short-answer, and open-ended questions.Footnote 11

Fig. Linking conferences to results. Source: Author

One test of the correctness of educational procedure is the happiness of the child.

—Maria Montessori

Put bluntly, the value that conference evaluations add is incongruously scant. In declining order of interest—with variations depending on the sector, theme, and discipline addressed—evaluations home in on (i) the overall reactions of participants, (ii) conference strengths and weaknesses, (iii) ratings of sessions and presentations, (iv) ratings of the extent to which the needs of participants were met, (v) areas for improvement, (vi) financial return on investment, (vii) participant learning in the short term, and (viii) new behaviors in the medium term. The case must be made that the last two areas demand more attention. And there surely is scope for Donald Kirkpatrick’s four levels of learning evaluation, even if they were developed in 1959 for the evaluation of training programs (Kirkpatrick D and Kirkpatrick J 2006). With minor modifications to adapt them to the context of conferences, the levels are as follows:

  • Reaction—To what degree do participants react favorably to an event?

  • Learning—To what extent do participants acquire the intended knowledge, skills, attitudes, confidence, and commitment based on their participation in the event?

  • Behavior—To what degree do participants apply what they learned during the event when they return to their job?

  • Results—To what extent do targeted outcomes occur as a result of the event and subsequent interaction and relation?
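To see how the four levels might move beyond run-of-the-mill reaction logging, consider a minimal sketch of tallying survey responses by Kirkpatrick level. The data and function names here are purely illustrative assumptions, not part of Kirkpatrick’s work or any event-planning toolkit; the point is only that tagging each response with a level makes it trivial to see whether learning, behavior, and results are being measured at all, or whether reaction data dominates:

```python
from collections import defaultdict

# Hypothetical responses, each a (Kirkpatrick level, score on a 1-5 scale).
# In a real evaluation these would come from a structured questionnaire.
responses = [
    ("reaction", 4), ("reaction", 5),
    ("learning", 3), ("learning", 4),
    ("behavior", 2),
    ("results", 2),
]

def average_by_level(responses):
    """Return the mean score per evaluation level."""
    by_level = defaultdict(list)
    for level, score in responses:
        by_level[level].append(score)
    return {level: sum(scores) / len(scores) for level, scores in by_level.items()}

print(average_by_level(responses))
```

A breakdown like this makes the chapter’s complaint concrete: if most responses cluster under “reaction”, the evaluation says little about learning, behavior, or results.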

The Poverty of Learning in Conferences

No grand idea was ever born in a conference, but a lot of foolish ideas have died there.

—F. Scott Fitzgerald

Most conferences are called to achieve a shared goal—that, ultimately, being collaborative learning that links to results—yet dispense at best information; they do not generate knowledge. Participants depart with their own learningFootnote 12 that, as noted above, is rarely evaluated and, in the first instance, not necessarily shared. This is because most conferences funnel programmed information; they do not know what potential collaborative learning, if any transpired, could enrich theory, research, and practice in their domain as a whole. Why? Chapman et al. (2006) remark that event planners assert they want to create spaces for learning but do not evaluate whether that—and the changes in behavior linking to results it should induce—actually did occur. Rather, they aggregate individual responses, thereby missing opportunities for subtler analyses of more diverse inputs. Helpfully, Chapman et al. remind us that, from an etymological perspective, to evaluate is to ascertain or fix the value of something; more profoundly, and typically after careful appraisal and study, evaluation helps establish its significance, worth, or condition. The first definition suggests determination of positive or negative effects; the second embraces the idea of determination of condition, which removes the requirement to assign worth. Evaluation techniques that rest on the first definition serve accountability; those that spring from the second propel learning. Chapman et al. posit a three-pronged “New Learning” conceptual framework integrating notions of learning organizations, communities of practice, and knowledge creationFootnote 13 to facilitate learning in conferences—not forgetting their evaluation—which uncovers fertile ground for research and practice.

I believe we are going to move into a situation where the more effective conferences will be smaller, more specialized, more focused, with occasional large gatherings to get the attention of the larger world.

—Maurice Strong

The nascent practice of unconferencing, cited earlier, bodes well too. Summarizing, the shortcomings of conferences are that: (i) conference programs are set by event planners and do not predict well what sessions are actually wanted; (ii) a distinction is made between presenters (teachers) and participants (learners); (iii) sessions are dominated by presenters: participants receive predetermined information passively; (iv) logistics revolve around general and breakout sessions; (v) content is broadcast in long, uninterrupted sessions; and (vi) chances to network are restricted to meals and social gatherings outside sessions. In contrast, some characteristics of unconferences are that: (i) the culture of unconferences is participatory, not passive; (ii) the intellectual capital of participants, not presenters, is harnessed; (iii) unconferences give time for individualized knowledge sharing and learning: the intent is not just to work toward the goal of the event; (iv) knowledge sharing and learning happen in small groups rather than in sessions; (v) interaction is put center stage; (vi) participants have greater input and control over sessions and are thus more apt to engage in knowledge sharing and learning that help realize the goal of the event; (vii) teaching and learning roles are not fixed; and (viii) sessions can be created on the spot. Note, however, that event planners still do not take advantage of unconferencing despite improved connectivity; the chief explanation is fear that unconferences will not work, fuelled by understandable concern over loss of control over one’s event and general unfamiliarity with associated facilitation requirements, technical and logistical considerations, and revenue models.