Background

Any researcher who wishes to become proficient at doing qualitative analysis must learn to code well and easily. The excellence of the research rests in large part on the excellence of the coding.

Anselm Strauss [1]

The analysis and interpretation of qualitative data can make an important contribution to research on implementation processes and their outcomes when such data are interpreted through the lens of implementation theory. These data may be found in documents, interview transcripts, or observational fieldnotes. In broad terms, there are two approaches to integrating qualitative methods and implementation theory. The first explains phenomena of interest through procedures that identify and characterise empirical regularities or deviant cases in natural language data through processes of induction [1]. The second derives explanations of relevant phenomena through structured methods of data analysis that directly engage with existing conceptual frameworks, models, and theories [2,3,4,5]. These are not mutually exclusive ways of working, and they are often combined. In this paper, we focus on developing tools for the second approach: a structured approach to qualitative data analysis [6] formed into a coding manual that supports researchers using Normalisation Process Theory (NPT) [7,8,9,10,11] in studies of implementation processes.

NPT provides a set of conceptual tools that support understanding and evaluation of the adoption, implementation, and sustainment of socio-technical and organisational innovations. NPT takes as its starting point that implementation processes are formed when actors seek to translate their strategic intentions into ensembles of beliefs, behaviours, artefacts, and practices that create change in the everyday practices of others [8, 11]. The central questions that follow from the application of NPT are always the same: what work do actors do to create change, how does this work get done, and what are its effects? Because NPT has its origins in research on the implementation of complex healthcare interventions, it does not see the intervention as a thing-in-itself, but rather as an assemblage or ensemble of beliefs, behaviours, artefacts, and practices that may play out differently over time and between settings [8]. It is supported by empirical studies using both qualitative and quantitative methods and by systematic reviews that have explored its value in different research domains [12,13,14].

Development of the coding manual was informed by the application of methods of qualitative content analysis described by Schreier [2]. This approach can be defined as ‘a research method for the subjective interpretation of the content of text-data through the systematic classification process of coding and identifying themes or patterns’ [3] and as ‘any qualitative data reduction and sense-making effort that takes a volume of qualitative material and attempts to identify core consistencies and meanings’ [4]. As qualitative content analysis has become more widely used, so too have coding frameworks and manuals that define the ways that data are identified, categorised, and characterised within a study. In qualitative content analysis, researchers are encouraged to develop manuals that describe and explicate the ‘rules’ for coding and categorising data [5]. The process of categorisation that follows from using a coding manual is useful because it reduces the cognitive burden of searching for and handling multiple constructs, and so frees researchers to carry the greater cognitive burden of interpretation. Within research teams, coding manuals support the quality and rigour of coding by providing ‘rules’ that are employed by each team member and, in this way, can ensure the consistency of coding. Parsimony can be important too: more is not necessarily better in qualitative investigation and analysis. Reducing the number of codes to those that represent core constructs can be understood as what Adams et al. [6], in a different context, have called ‘subtractive transformation’.

A generalizable NPT coding manual is of value to researchers from a range of disciplines interested in the ways that implementation processes play out. It provides a consistent set of definitions of the core constructs of the theory, shows how they relate to each other, and enables researchers doing qualitative content analysis to work together within a common frame of analysis (for example, in qualitative evidence syntheses, or in team-based qualitative analysis of interview or observational data). In the future, as software for computational hermeneutics [15] becomes more widely available and practically workable, a coding manual could also be integrated into the development of topic modelling instruments and algorithms.
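
One possible direction, offered here only as an illustrative sketch and not as part of the published manual, is to use construct names and descriptors from a coding manual as seed vocabularies when inspecting the output of a standard topic model. The seed terms and example documents below are assumptions chosen for illustration; the topic model is scikit-learn's LatentDirichletAllocation.

```python
# Speculative sketch: compare LDA topic vocabularies against seed terms
# drawn (hypothetically) from coding-manual construct descriptors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "Staff talked about making sense of the new referral pathway together.",
    "Managers worked to secure engagement and keep colleagues enrolled over time.",
]

# Hypothetical seed terms; real seeds would come from the manual's descriptors.
seed_terms = {
    "coherence-building": {"sense", "purpose", "understand"},
    "cognitive participation": {"engagement", "enrolled", "legitimacy"},
}

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)

vocab = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_words = {vocab[i] for i in weights.argsort()[-10:]}
    for construct, seeds in seed_terms.items():
        overlap = top_words & seeds
        if overlap:
            print(f"Topic {topic_idx} echoes '{construct}': {sorted(overlap)}")
```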

Despite the value of coding manuals to researchers, the process of creating rigorous and robust manuals for individual studies is rarely described, and generalizable coding manuals are rare. In this paper, we start to fill this gap. We describe the purposes, methods of development, and application of a generalizable coding manual that translates NPT into a more easily usable framework for qualitative analysis.

Methods

NPT has developed over time through contact with empirical studies and evidence syntheses, and this has led to different iterations of the theory. These have been formed through publications that have served three purposes. First, there is a set of papers aimed explicitly at theory-building in which core constructs of NPT have been developed and their implications explored [7,8,9, 16, 17]. Second, there is a set of papers aimed explicitly at theory-translation in which those core constructs have been clarified and refined through methodological research leading to the development of toolkits [18, 19] and survey instruments [20,21,22]. Finally, there is a set of papers that contribute to theory-elaboration through the development of new constructs during empirical studies and systematic reviews. These explain additional aspects of implementation processes [23,24,25].

Translating a set of theoretical constructs into a theory-informed coding manual for qualitative data analysis involves a series of tasks that are, in themselves, a form of qualitative analysis. Qualitative research focuses on the identification, characterisation, and interpretation of empirical regularities or deviant cases in natural language data. The process described here developed organically and opportunistically through these different tasks as they were conducted, and through discussion amongst the authors of this paper. The work of defining key constructs of the theory, assembling these into a framework, and then transforming them into a workable coding manual was informed by the qualitative content analysis procedures described by Schreier [2].

1. Concept identification. The result of the iterative development of NPT is a body of constructs representing the mechanisms that motivate and shape implementation processes, the outcomes of these processes, and the contexts in which their users make them workable and integrate them into practice. These core constructs of NPT were distributed across papers that developed the theory [7,8,9,10,11, 16, 17, 23,24,25] and others that developed the means and methods of its application [18,19,20,21,22]. In June 2020, CRM assembled these constructs into a taxonomy of statements (n=149) that identified, characterised, and explained observable features of the collective action and collaborative work of implementation (the taxonomy of NPT statements is presented in the online supplementary material).

2. De-duplication and disambiguation. The taxonomy of 149 statements assembled in the selection and structuring work included multiple duplicates, along with ambiguous and overlapping descriptions of constructs. CRM identified duplicate, ambiguous, and overlapping constructs, which were then either disambiguated or eliminated. After this work was completed, 38 discrete constructs were retained to make up a ‘first pass’ coding manual (the ‘first pass’ coding manual is presented in the online supplementary material).

3. Piloting. The ‘first pass’ manual was piloted. CRM used it to code two papers selected from an earlier NPT systematic review. These were comprehensively coded and checked by all the authors of this paper, who critically commented on codes and coding decisions. The same coding manual was then applied to two sets of interview data collected in other studies that were informed by NPT. First, AG coded transcripts of interviews (n=55, with managers, practitioners, and patients) conducted for an evaluation of the accelerated implementation of remote clinician-patient interaction in a tertiary orthopaedic centre during the COVID-19 pandemic. Second, KG coded transcripts of interviews (n=22, with community mental health professionals) conducted for the process evaluation of the EYE-2 Trial (an engagement intervention for first episodes of psychosis employed in early intervention in the community) [26].

4. Further disambiguation. Pilot work demonstrated that the main elements of the ‘first pass’ coding manual were workable in practice, but also revealed that the manual was hard to use: it was over-complex, and it micro-managed the process of interpretation in ways that defeated attempts at nuanced interpretation. Additional work to disambiguate constructs and eliminate overlapping or redundant ones was therefore undertaken as we worked through steps 5–8, below.

5. Identification of context-related constructs. Within the coding manual, the contexts in which implementation work takes place remained invisible, although taking context into account had been an important element of theory development and elaboration over time [10, 11]. The contexts of implementation can be understood as both structures and processes [27]. To remedy the absence of constructs representing context, CRM returned to the taxonomy of 149 NPT constructs and the ‘first pass’ coding manual and searched them for salient descriptors of context. Four of these were identified and added to the manual.

6. Further piloting. The four constructs relating to implementation contexts were piloted ‘in use’ by CRM on a set of interview transcripts (n=36) collected in a study of professionals’ participation in the implementation of treatment escalation plans to manage care at the end of life in British hospitals [28]. It was found that these constructs characterised process contexts effectively.

7. Presentation. The structure of the coding manual was then presented and discussed in a series of international webinars in February–April 2021. Discussion with participants in those webinars assisted in clarifying the ways that NPT constructs fitted together and characterised actual processes and outcomes.

8. Agreement. Once the final structure of the coding manual was laid out, all authors read and commented on it. This led to further ruthless editing and simplification of the coding manual.

9. Post-submission. Journal peer review is intended to improve papers for publication. In this case, it led to a clearer and more coherent presentation of the methods leading to the development of the coding manual and of the coding manual itself. An important outcome of this process was further simplification of the construct descriptors in Tables 1 and 2. These descriptors were also linked to their primary sources in the NPT literature.

10. ‘Living peer review’. Between the initial submission of the manuscript to Implementation Science (2 September 2021) and its finalisation (30 December 2021), the first draft of the coding manual was viewed or downloaded more than 1600 times from the preprint servers ResearchSquare.com and ResearchGate.net. This led to useful feedback from researchers who began to use the coding manual to do ‘real-world’ data analysis as soon as it became available but who did not have specific NPT expertise. As a result of this ‘living peer review’, further simplification of the descriptions of NPT constructs was undertaken by CRM.

Table 1 NPT coding manual part A: primary constructs—contexts, mechanisms, and outcomes
Table 2 NPT coding manual part B: secondary constructs—mechanisms

Results: a coding manual for normalisation process theory

Working through the procedures described above led to part A of the coding manual for NPT. This is presented in Table 1 and consists of 12 primary NPT constructs. Although it was not originally intended to do so, we found that the final structure of the coding manual sits easily alongside the Context-Mechanism-Outcome configuration of realist evaluation studies [54]. We describe this in Fig. 1. The array of primary NPT constructs took the following form.

1. Contexts are events in systems unfolding over time within and between settings in which implementation work is done (primary NPT constructs: strategic intentions, adaptive execution, negotiating capability, reframing organisational logic).

2. Mechanisms motivate and shape the work that people do when they participate in implementation processes (primary NPT constructs: coherence-building, cognitive participation, collective action, reflexive monitoring).

3. Outcomes are the effects of implementation work in context, which make visible how things change as implementation processes proceed (primary NPT constructs: intervention performance, normative restructuring, relational restructuring, sustainment).

Fig. 1 Linking NPT to realist evaluation: implementation contexts, mechanisms, and outcomes

These 12 constructs form a general set of codes that can be applied to almost any textual data, whether these are fieldnotes, interview transcripts, or published texts. They are all grounded in empirical research. Part A (Table 1) of the coding manual thus provides a general set of codes or categories that guide analytic work. Each construct is named and briefly described. Additionally, each descriptor is accompanied by an example of the empirical application of the construct in already published work. The aim of a coding manual is to provide guidance about how to interpret data that are often highly nuanced and represent complex and sometimes very dynamic processes at work. However, no theory, framework, or model can generate a set of codes that will infallibly cover all possible features of data. In this case, guidance about interpretation, rather than scriptural authority, is the primary intention of our coding manual. Detailed guidance on the process of coding can be found in work by Strauss [1] and Schreier [2].
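
To make the structure of part A concrete, the sketch below (our illustration, not software accompanying the manual) shows one way a team might represent the 12 primary constructs and record coding decisions. The helper function, the example fieldnote excerpt, and the omission of descriptors are assumptions for illustration; the authoritative definitions remain those in Table 1.

```python
# Minimal sketch: the 12 primary constructs of part A, grouped by the
# context/mechanism/outcome structure shown in Fig. 1. Descriptor text is
# deliberately omitted; Table 1 remains the authoritative source.
PART_A = {
    "context": [
        "strategic intentions",
        "adaptive execution",
        "negotiating capability",
        "reframing organisational logic",
    ],
    "mechanism": [
        "coherence-building",
        "cognitive participation",
        "collective action",
        "reflexive monitoring",
    ],
    "outcome": [
        "intervention performance",
        "normative restructuring",
        "relational restructuring",
        "sustainment",
    ],
}

VALID_CODES = {code for group in PART_A.values() for code in group}


def tag_segment(segment: str, codes: list[str]) -> dict:
    """Record a researcher's coding decision for one data segment.

    Interpretation stays with the researcher; this only checks that the
    chosen codes belong to part A of the manual and stores the decision.
    """
    unknown = set(codes) - VALID_CODES
    if unknown:
        raise ValueError(f"Codes not in part A of the manual: {sorted(unknown)}")
    return {"segment": segment, "codes": sorted(codes)}


# Hypothetical usage with an invented fieldnote excerpt.
example = tag_segment(
    "The ward team re-organised handover so the new checklist fitted their routine.",
    ["collective action"],
)
print(example)
```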

More granular possibilities are presented in part B (Table 2) of the coding manual. Here, the four primary NPT constructs related to mechanisms of purposive social action (coherence-building, cognitive participation, collective action, reflexive monitoring) each possess four associated secondary constructs. These secondary constructs provide further, equally empirically grounded codes where the available qualitative data support interpretation at that level of detail. Once again, each construct is named and briefly described, and each descriptor is accompanied by an example of the empirical application of the construct in already published work. However, the use of these 16 secondary constructs in coding is not mandatory, and many papers included in systematic reviews [12,13,14] of NPT studies seem either to have treated them as discretionary or not to have referred to them at all. They are, however, valuable because the mechanisms that motivate and shape implementation processes are often those that are mobilised to overcome perceived problems of context, and so these constructs carry real explanatory weight. In NPT, analysis always focuses on purposive social action (the work that people do to enact evidence or innovation in practice), and for this reason, focusing attention on the constructs that characterise action is central to the interpretive task.
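
For teams that do code at this finer grain, the nesting of part B under the four mechanism constructs can be sketched in the same spirit as the example above. The labels 'secondary_1' to 'secondary_4' below are placeholders only; the actual secondary construct names and descriptors are those given in Table 2 and are not reproduced here.

```python
# Structural sketch only: each of the four mechanism constructs has four
# secondary codes. Placeholder labels stand in for the Table 2 names.
MECHANISMS = [
    "coherence-building",
    "cognitive participation",
    "collective action",
    "reflexive monitoring",
]

PART_B = {
    mechanism: [f"{mechanism} / secondary_{i}" for i in range(1, 5)]
    for mechanism in MECHANISMS
}


def parent_mechanism(secondary_code: str) -> str:
    """Roll a secondary code up to its parent mechanism, since coding at
    the secondary level is optional and analyses may aggregate upwards."""
    for mechanism, secondaries in PART_B.items():
        if secondary_code in secondaries:
            return mechanism
    raise KeyError(f"Unknown secondary code: {secondary_code}")


print(parent_mechanism("collective action / secondary_2"))  # -> collective action
```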

Discussion

The purpose of developing this coding manual was to clarify and simplify NPT for the user and to make it more easily integrated and workable in research on the adoption, implementation, and use of socio-technical and organisational innovations. In qualitative content analysis, as in other forms of qualitative analysis, proliferating constructs can easily make the business of coding ever more microscopic and less analytically rewarding. Indeed, the more parsimonious a prescheduled theoretical structure is, the more space it provides for nuanced interpretation and for the development of novel categories of data and the analytic constructs that can be derived from them.

In the development of the NPT coding manual described here, we sought to eliminate ambiguity and add workability from the outset. The process of selection and structuring we describe yielded a set of 12 primary NPT constructs (Table 1: coding manual part A) and 16 secondary constructs (Table 2: coding manual part B). As Fig. 1 shows, these identify, characterise, and explain the course of implementation processes through which strategic intentions are translated into practices, and they enable understanding of how enacting those practices can lead to different outcomes and to varying degrees of sustainment.

Coding is a centrally important procedure in qualitative analysis [1], but it must be emphasised that it is only one part of a whole bundle of cognitive processes through which researchers make and organise meanings in the data. A coding manual cannot cover all of the analytic possibilities presented by a qualitative data set. Reflexive procedures for identifying phenomena outside the scope of a theory, developing new codes, and linking them to other explanatory models are always important in theory-informed qualitative work. The act of coding involves descriptive work that is a foundation for the interpretation of data, but it is not a proxy for interpretation, nor is the purpose of a coding manual to verify the underpinning theory. The whole purpose of coding, and of linking coding to theory, is to build and inform interpretation and understanding. This is not a discrete stage in data analysis but is continuous throughout [1].

Linking NPT to the CMO model of realist evaluation did not happen by accident. NPT was developed through a series of iterations that were already heading in this direction. This began with empirical studies that led to rigorous analysis of the mechanisms that motivate and shape implementation processes [7, 8]. As the theory was developed and applied, further consideration was given to the problem of contexts [9, 13, 16] and to the question of how mechanisms interact with contexts to produce specific outcomes [11, 24, 25, 55]. At the same time, systematic reviews [12,13,14] revealed that the use of NPT was impeded because researchers without a strong theoretical background in the social sciences needed both clearer definitions of constructs and a conceptual toolkit that linked these definitions together so that they could see how implementation mechanisms and contexts interact to shape different kinds of outcomes. Drawing these together in a single coding manual would assist in solving these problems.

Strengths and limitations

We describe a set of methods likely to be useful to qualitative researchers in other areas of research who wish to consider developing such manuals for other theories (for example, relational inequalities theory [56] or event system theory [57]). A strength of the work was that the coding manual was developed by an international multidisciplinary team with personal experience of developing and working with NPT and with other implementation frameworks, models, and theories. This ensured that, from the outset, the development of the coding manual was closely linked to knowledge about the ways that NPT can be used. An unanticipated consequence of the coding manual being published on preprint servers (ResearchSquare.com and ResearchGate.net) was that other researchers started to use it almost immediately and quickly fed back criticism or encouragement. This added value to both the development process and the final product.

This work was undertaken opportunistically and grew organically. The manual thus developed cumulatively and in an ad hoc way. Working from a structured protocol would have added greater methodological transparency and perhaps also potential for replication of the development process. Finally, researchers working from different perspectives, with different experiences of NPT, using primary empirical studies rather than theory papers, or working from a prescheduled protocol, might have produced a different coding manual.

Conclusion

This paper describes the procedures by which the NPT coding manual for qualitative research was produced. It also presents the manual ready for use. But more than this, the process of producing the coding manual has also led to the simplification and consolidation of the theory by bringing together empirically grounded constructs derived from multiple iterations of theoretical development over two decades.

Coding manuals are useful tools to support analysis in qualitative research. They reduce cognitive load and at the same time render the assumptions underpinning qualitative analysis transparent and easily shared amongst teams of researchers. The coding manual makes the application of NPT simpler for the user. This adds value to qualitative research on the adoption, implementation, and sustainment of innovations by providing a stable, workable set of constructs that sit comfortably alongside the well-established model of realist evaluation [54]. It also forms a translational framework for researching and evaluating implementation processes and thus complements other resources for NPT researchers such as the NPT Toolkit and the NOMAD survey instrument [17, 19, 21, 22].