Background

There is currently a wide gap between the treatments found to be efficacious in randomized controlled trials and the treatments available in routine clinical care. One comprehensive theoretical model of dissemination and implementation of healthcare innovations intended to bridge this gap was developed by Greenhalgh et al.[1]. Derived from a systematic review of 13 distinct research traditions[2, 3], this model is both internally coherent and based largely on scientific evidence. The model is consistent with findings from other systematic narrative reviews[4–6] regarding the factors found to be related to implementation. In addition, it served as the starting point for development of the Consolidated Framework for Implementation Research[7].

As shown in Figure 1, implementation is viewed as a complex process organized under six broad constructs: innovation; adopter; communication and influence; system antecedents and readiness (inner organizational context); outer (inter-organizational) context; and implementation process. However, the model offers no explicit recommendations for operational definitions or items to measure most of the identified constructs. The authors recommend a structured, two-phase approach for capturing their model[1]. For phase one, they advised assessment of specific individual components of the model (e.g., perceived characteristics of the innovation, adopter characteristics). For the second phase, they proposed construction of a broad, unifying meta-narrative of how these components interact within the social, political, and organizational context[8].

Figure 1

Greenhalgh and colleagues' (2004) model of implementation processes.

In order to advance toward a testable theory, and thus benefit implementation science, key constructs must be operationalized and measured. Articulation of this model may also aid the implementation process in other ways. For example, administrators or treatment developers may ask providers to complete these measures in order to understand individual and organizational barriers to implementation and to identify strengths that can help teams overcome these challenges. This information can then be used to inform the design of training, promote provider engagement in evidence-based innovations, assist in problem-solving around obstacles, and guide development of the implementation process.

Our research group set out to operationalize the constructs in Greenhalgh et al.'s[1] model for use in a quantitative survey and a semi-structured interview guide (a full copy of the survey can be found in Additional file 1 and a full copy of the semi-structured interview in Additional file 2). The present paper provides the background, rationale, working definitions, and measurement of constructs. This work was done in preparation for studying a national roll-out of two evidence-based psychotherapies for post-traumatic stress disorder (PTSD) within the Department of Veterans Affairs (VA)[9]. Although the questionnaire and interview guide were developed to assess factors influencing implementation of specific treatments for PTSD, they can likely be adapted for assessing the implementation of other innovations. This systematic effort represents a first step toward operationalizing constructs in the Greenhalgh model.

Methods

Construction of measures: systematic literature search and article review selection process

Measure development began with a systematic literature search of keywords representing 53 separate sub-constructs from the six broad constructs (innovation, adopter, communication and influence, system antecedents and readiness, outer context, and implementation process) identified in Figure 1. Only those constructs that were both related to implementation of an existing innovation (rather than development of an innovation) and definable by our research group were included.1 Searches were conducted in two databases (PsycInfo and Medline) and were limited to empirical articles published between 1 January 1970 and 31 December 2010. Search terms included the 53 sub-constructs (e.g., relative advantage) and ‘measurement’ or ‘assessment’ or ‘implementation’ or ‘adoption’ or ‘adopter’ or ‘organization.’ After culling redundant articles and eliminating unpublished dissertations and articles not published in English, we reviewed the abstracts of the 6,000 remaining articles. From that pool, 3,555 citations were deemed appropriate for further review.
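To illustrate the structure of these searches, the sketch below assembles Boolean queries pairing each sub-construct with the methodological terms. It is a minimal illustration only: the construct list shown is abbreviated and the variable names are our own, and the actual searches were run through the PsycInfo and Medline interfaces with the date and language limits described above.

```python
# Illustrative sketch of the Boolean query structure (abbreviated,
# hypothetical construct list; not the study's actual search scripts).
SUB_CONSTRUCTS = [
    "relative advantage", "compatibility", "complexity",
    "trialability", "observability",  # ...plus the remaining sub-constructs
]
METHOD_TERMS = ["measurement", "assessment", "implementation",
                "adoption", "adopter", "organization"]

def build_query(construct: str) -> str:
    """Pair one sub-construct with the OR'd methodological terms."""
    method_clause = " OR ".join(f'"{term}"' for term in METHOD_TERMS)
    return f'"{construct}" AND ({method_clause})'

for construct in SUB_CONSTRUCTS:
    print(build_query(construct))
# e.g., "relative advantage" AND ("measurement" OR "assessment" OR ...)
```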

Two members (CO, SD) of the investigative team conducted a preliminary review of titles and abstracts for possible inclusion. Articles were selected for further consideration if they proposed or discussed how to measure a key construct. Clear and explicit definitions of constructs were rarely provided in the literature, resulting in our inclusion of articles with concepts that overlapped with one another. From the review of titles and abstracts, 270 articles were retrieved for full-text review. If the actual items from the measure were not provided in the paper, a further search was made using cited references. The investigative team also reviewed surveys that had been used in studies on health providers’ adoption of treatments[10–12] and organizational surveys related to implementation[13, 14].

We next developed a quantitative survey and semi-structured interview using an iterative process whereby the full investigative team reviewed potential items. For an item to be included in our measurement approach, all members of the team had to agree. The resulting pool of items was presented to 12 mental health professionals who offered feedback on item redundancy and response burden. Items were further revised by the team for clarity and consistency. In addition, our team concluded that including items reflecting every aspect of the model in the quantitative survey would be burdensome to participants. Therefore, we made strategic decisions, described below, as to which items to retain in the survey versus the semi-structured interview. Certain constructs in the Greenhalgh model appear under more than one domain (e.g., social network appears under both adopter and communication and influence) or assess overlapping constructs (e.g., peer and opinion leaders). For certain constructs, the use of administrative data was deemed the most efficient means of assessment and served to augment survey or interview questions (e.g., incentives and mandates, environmental stability).

Results

Table 1 presents the constructs and working definitions, as well as a sample item for each. For each construct, an overview of relevant measures is provided, followed by an explanation of the measures that ultimately influenced our survey and semi-structured interview questions or, for relevant constructs, the use of administrative data.

Table 1 Model constructs and examples of survey and interview questions and administrative data

Innovation

The five innovation attributes originally identified by Rogers[2] and included in the Greenhalgh et al. model are: relative advantage, compatibility, complexity, trialability, and observability. Additional perceived characteristics given less emphasis by Rogers but included by Greenhalgh et al. are potential for reinvention, risk, task issues, nature of the knowledge required for use, and augmentation/technical support.

Several investigators have attempted to operationalize Rogers’ innovation attributes[14–18]. The approach most theoretically consistent with Rogers was constructed by Moore and Benbasat[19], but this was not developed for application to a healthcare innovation[20, 21]. The 34- and 25-item versions of that scale have high content and construct validity and acceptable levels of reliability. Our group used several items from the Moore-Benbasat instrument that were deemed applicable to mental health practice (i.e., complexity, observability, trialability, compatibility) and reworded others to be more relevant to healthcare treatments (e.g., ‘The treatment [name] is more effective than the other therapies I have used’).

Others have also assessed Rogers’ innovation characteristics. A questionnaire by Steckler et al.[17] further informed the phrasing of our survey items for content and face validity. Content from additional sources[14, 18, 22, 23] was deemed not applicable because it examined socio-technical factors, deviated too far from the constructs, or did not map onto measurement of a healthcare practice.

Items concerning potential for reinvention were not taken from existing surveys, as most focused on identifying procedures specific to a particular intervention[24]. Thus, we were influenced by other discussions of reinvention that applied more broadly across implementation efforts[25]. In particular, our items were constructed to assess providers’ reasons for making adaptations. As a perceived attribute of innovation, risk refers to uncertainty about an innovation’s possible detrimental effects. Existing tools for assessing risk focus on the adopter rather than the innovation[26, 27]. Thus, we reviewed these instruments both for the adopter characteristics (presented below) and to inform our items for risk.

The limited literature on nature of knowledge concerns how adopters use information instrumentally, for both problem solving and strategic application[28]. However, Greenhalgh viewed nature of knowledge as whether an innovation is transferable or codifiable, which required us to craft our own items. Assessment of technical support is typically innovation specific, such as adequate support for a technology or practice guideline[29, 30]. The technical support needed to acquire proficiency likely differs across innovations (i.e., training support), and thus we included items on the helpfulness of manuals and accompanying materials. Davis[31] developed a reliable and valid instrument to assess perceived usefulness (i.e., the belief that the innovation enhances job performance). Although the construct has a different label, we judged it as nearly identical to Greenhalgh’s task issues. One item was borrowed from this scale to represent task issues.

All innovation attributes in the Greenhalgh model were represented in the quantitative survey. A small number (e.g., technical support) were also included in the semi-structured interview.

Adopter characteristics

Greenhalgh et al.[8] suggested that a range of adopters’ psychological processes and personality traits might influence implementation. Items specifically identified in the model include adopter needs, motivation, values and goals, skills, learning style, and social networks[8]. Not all proposed adopter characteristics were depicted in the model figure; in the text, Greenhalgh et al.[1] identified other potentially relevant adopter characteristics such as locus of control, tolerance of ambiguity, knowledge-seeking, tenure, and cosmopolitanism.

The literature lacked operational definitions of need; thus, we created our own. Assessment of this construct was informed by questions from the Texas Christian University Organizational Readiness for Change survey[12]. We included one item in our survey specific to need in the context of professional practice.

Assessing motivation or desire for change (‘readiness for change’) has most often been based on the transtheoretical stages of change model[32–34]. One of the most widely used tools in this area is the University of Rhode Island Change Assessment Scale[33], which has items assessing pre-contemplation (not seeking change), contemplation (awareness of the need for change and assessment of how change might take place), action (seeking support and engaging in change), and maintenance (seeking resources to maintain changes made). We adapted items from this scale for our survey. Continued development of the stages of change model after construction of the Change Assessment Scale incorporated an additional preparation stage, which we represented in the qualitative interview as a question regarding providers’ interest in and attendance at trainings in evidence-based treatments.

Assessment of values and goals typically reflects estimation of personal traits/values (e.g., altruism) and terminal goals (e.g., inner peace)[34]. Funk et al.[35] devised a survey that included some adopter characteristics in relation to utilizing research-based innovations in healthcare settings. We used an item from their survey[35] as well as one from the Organizational Readiness for Change-Staff Version survey[12] to operationalize this construct.

The preferred means of assessing skills in healthcare practice is observational assessment as opposed to self-report[36, 37]. However, in order to capture some indication of skill, we simply added an ordinal item on level of training in the evidence-based treatment.

Greenhalgh et al.[1] provided no formal definition of learning style. We reviewed numerous learning style measures[38–45], but most had poor reliability and validity[46]. Others had attempted to revise and improve upon these instruments with limited success[47, 48]. Recently, an extensive survey of learning style was created[49]. Although we did not utilize these items because they did not reflect learning processes (e.g., auditory learning), we did follow the suggestion to word items directly about preferred instructional methods[49] (for reviews, see[50, 51]). Given the potential complexity of this construct and the various ways to measure it, we included three diverse items, without expecting them to necessarily represent one scale, and also assessed this construct in the interview.

Measurement of some of the adopter traits has occurred in the larger context of personality research. For example, there are several measures of locus of control (LOC)[52–54]. After a review of these tools and discussion as to what was most applicable to the implementation of healthcare innovations, our group primarily borrowed items from Levenson’s[53] Multidimensional Locus of Control Inventory. The Levenson inventory includes three statistically independent scales that allow a multidimensional conceptualization of LOC, unlike the widely used Rotter scale, which is unidimensional and conceptualizes LOC as either internal or external. The Levenson scale has strong psychometric properties[53]. Unlike other LOC scales, it is not phrased to focus on health and therefore appeared more easily applied to measuring LOC as a general personality factor. Similarly, numerous surveys of tolerance (and intolerance) for ambiguity have been developed[55–61]. After reviewing these measures, we chose to adapt items from McLain’s[59] Multiple Stimulus Types Ambiguity Tolerance Scale due to its relevance to healthcare.

For knowledge-seeking, we adapted one additional question from the Organizational Readiness for Change-Staff Version survey[12] and devised two of our own. Tenure has consistently been measured as a temporal variable[62–64]. A clear distinction can be made between organizational and professional tenure. For the purposes of our survey, both organizational tenure[64] and professional tenure were included.

One means of assessing cosmopolitanism is by identifying belonging to relevant groups[65]. Woodward and Skrbis’[66] assessment of cosmopolitanism informed the construction of our items. Pilcher[65] differentiated between two conceptualizations of cosmopolitanism: ‘subjective/identity’ and ‘objective/orientation,’ where the former captures affiliations and the latter relevant attitudes. We followed a more ‘subjective/identity’ approach by including one survey item capturing how many professional meetings one attends per year[67].

Communication and influence

Communication and influence constructs in the Greenhalgh model included in the survey are: social networks, homophily, peer opinion (leader), marketing, expert opinion (leader), champions, boundary spanners, and change agent.

One of the most common measures of social networks is a name generator used to map interpersonal connections[68–70]. Relatedly, although there are several ways that peer opinion leaders have been assessed[3, 71], the most common is to ask respondents from whom they seek information and advice on a given topic. We included a name generator in the survey to identify social networks, as well as items asking about peer relationships. Similarly, we included one item to assess whether a provider had access to a peer opinion leader. This latter item is modeled after the Opinion Leadership scale, which has adequate reliability[72].
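As an illustration of how such name-generator data can be summarized, the sketch below builds a directed advice network from nominations and flags frequently named providers as candidate peer opinion leaders. This is not the study's analysis code; the provider names and responses are hypothetical.

```python
# Illustrative sketch: converting name-generator responses ("From whom do
# you seek advice about this treatment?") into a directed advice network.
# All providers and nominations below are hypothetical.
from collections import Counter

responses = {
    "provider_a": ["provider_b", "provider_c"],
    "provider_b": ["provider_c"],
    "provider_d": ["provider_c", "provider_b"],
}

# Directed edges run from each respondent to the colleagues they nominate.
edges = [(src, dst) for src, nominees in responses.items() for dst in nominees]

# In-degree = number of times a provider is named as an advice source;
# high in-degree providers are candidate peer opinion leaders.
in_degree = Counter(dst for _, dst in edges)
for provider, count in in_degree.most_common():
    print(provider, count)  # provider_c 3, provider_b 2
```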

Since there was no psychometrically sound measure of homophily in the literature[73], we chose to capture this construct through the interview data, specifically the degree to which providers in a particular program had similar professional and educational backgrounds and theoretical orientations. Similarly, no measure of marketing was identified, so we crafted one question for the interview.

While the terms expert opinion leader, change agent, and peer opinion leader are often used interchangeably and inconsistently[8], we were careful to create distinct definitions and measurements for each. For expert opinion leaders, the interview assessed access to an expert consultant, and the survey asked whether providers were themselves consultants or trainers in the treatment.

Innovation champions play multiple roles in promoting an innovation (e.g., organizational maverick, network facilitator)[1, 15, 74]. Our team assessed this construct in the interview by initiating a discussion of how the innovation was promoted and by whom.

The construct of boundary spanners has received minimal application in studies of implementation in healthcare settings[75]. Because there were no available tools for this construct, we modeled our items on the definition of boundary spanners: individuals who link their organization/practice with internal or external influences, helping various groups exchange information[76]. We also included one question to capture whether providers were affiliated with boundary spanners or were themselves boundary spanners.

The interview also included questions to identify the influence of a change agent by asking about decision-making responsibility in the organization as well as facilitation of internal implementation processes. While only a limited number of constructs from the communication and influence domain were included in the survey, many of these concepts seemed best captured through dialogue and description and were therefore included in the interview.

System antecedents and readiness for innovation (inner context)

The constructs that comprise the inner and outer organizational context overlap considerably, making sharp distinctions difficult[6, 77]. Greenhalgh identified two constructs of inner context: system antecedents (i.e., conditions that make an organization more or less innovative) and system readiness (i.e., conditions that indicate preparedness and capacity for implementation).

As can be seen in Figure 1, system antecedents for innovation include several sub-constructs: organizational structure (size/maturity, formalization, differentiation, decentralization, slack resources); absorptive capacity for new knowledge (pre-existing knowledge/skills base, ability to interpret and integrate new knowledge, enablement of knowledge sharing); and receptive context for change (leadership and vision, good managerial relations, risk-taking climate, clear goals and priorities, high-quality data capture). In a review of organizational measures related to implementation in non-healthcare sectors, Kimberly and Cook[14] noted few standardized instruments.

Measurement of organizational structure has typically used simple counts of particular variables. Although this appears straightforward, providers may be limited in their knowledge of their organizational structure[14]. Thus, organizational structure and its sub-constructs were deemed best captured through the interview and administrative data sources. For our investigation of national roll-outs of two evidence-based psychotherapies, we were also able to integrate existing data routinely collected by the VA’s Northeast Program Evaluation Center (NEPEC). NEPEC systematically collects program-, provider-, and patient-level data from all specialized behavioral and mental health programs across the US[78, 79], allowing us to assess a number of organizational constructs.

Capitalizing on NEPEC administrative data, we were able to capture size/maturity as program inception date, number of available beds, number of patients served in the past year, and number of full-time providers at various educational levels. Formalization was represented by program adherence to national patient admission, discharge, and readmission procedures, as well as through interview discussion regarding provider clarity about the organizational rules for decision-making and implementing changes. Differentiation, or division among units, was examined through providers’ interview descriptions of separations between staff from different backgrounds (e.g., psychology, nursing) and of how different staff sectors (e.g., outpatient and residential) communicated and shared practices.

Although there is no standardized measure of decentralization, we devised our own items regarding the dispersion of decision-making authority around the innovation. Likewise, there are no uniform instruments for slack resources; NEPEC data were used to capture staff-to-patient ratio and program capacity (including number of unique patients and number of visits).
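As a simple illustration, slack-resource indicators of this kind can be derived directly from program-level records. The sketch below uses hypothetical records with made-up field names standing in for NEPEC-style data; it is not the study's actual data pipeline.

```python
# Illustrative sketch: deriving slack-resource indicators from
# program-level administrative records. All records and field names
# are hypothetical stand-ins for NEPEC-style data.
programs = [
    {"program": "site_1", "fte_providers": 6.0, "unique_patients": 240, "visits": 1900},
    {"program": "site_2", "fte_providers": 3.5, "unique_patients": 210, "visits": 1400},
]

for p in programs:
    staff_per_patient = p["fte_providers"] / p["unique_patients"]
    visits_per_patient = p["visits"] / p["unique_patients"]
    print(p["program"], round(staff_per_patient, 3), round(visits_per_patient, 1))
# site_1 0.025 7.9
# site_2 0.017 6.7
```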

For absorptive capacity for new knowledge, we devised items or questions for pre-existing knowledge/skill base, ability to learn and integrate new information, and enablement of knowledge sharing. Pre-existing knowledge/skill base was also included in the survey by identifying training level and tenure in the particular program as well as the organization. This was explored further through the interview when assessing overlapping skills-focused questions (see Adopter characteristics section). Ability to learn and integrate new information was assessed in the interview by asking about the provider’s experience of learning about and using the innovation, and was felt to be adequately captured by interview questions regarding knowledge-seeking. Enablement of knowledge sharing was included in the survey and directly assessed communication patterns and exchange of knowledge.

Greenhalgh et al.’s construct of receptive context for change was judged to be somewhat similar to organizational readiness to change and organizational culture and climate. There are at least 43 organizational readiness for change measures, many of which have poor psychometric properties[80]. Although we considered a number of instruments[81–83], the one that most influenced the construction of our survey was the widely used Texas Christian University Organizational Readiness for Change survey[12], which has demonstrated good item agreement and strong overall reliability.

Similarly, although several tools exist for assessing culture and climate[84–86], most do not adequately capture Greenhalgh’s constructs, and so we developed new items to measure a number of these constructs. We reviewed the Organizational Social Context survey[87], but most of these items were also not representative of Greenhalgh’s constructs. Similarly, we reviewed the Organizational Culture Profile[88]. Although various items shared some commonality with Greenhalgh’s constructs (e.g., ‘being innovative’), we found most items to be relatively unspecific (e.g., ‘fitting in’).

We reviewed several questionnaires that specifically measured organizational leadership. One psychometrically sound measure, the Multifactor Leadership Questionnaire[89, 90], informed our survey item construction. Leadership items examined support for a new initiative from a variety of levels, including general mental health and program leaders. We devised an item to capture the presence and use of leadership vision.

More specifically, items from the Texas Christian University Organizational Readiness for Change survey[12] informed our survey items for managerial relations and risk-taking climate. Because there are no measures of clear goals and priorities or of high-quality data capture, we constructed our own items to represent these constructs.

System readiness for innovation includes tension for change, innovation-system fit, power balances (support versus advocacy), assessment of implications, dedicated time and resources (e.g., funding, time), and monitoring and feedback. No existing tools were available to capture these constructs, and many are not easily assessed with simple survey items; they were therefore included in the interview.

We located only one relevant measure of tension for change[91], a rating system developed through interviews with organizational experts to identify factors that influence health system change. Unfortunately, the authors did not provide the specific items utilized, so we captured tension for change in the interview by asking providers about their existing work climate and the perceived need for new treatments. The constructs of innovation-system fit, power balances, assessment of implications, dedicated time and resources, and monitoring and feedback also had no standardized measures, and thus we devised our own questions.

Outer context

Outer context constructs include socio-political climate, incentives and mandates, interorganizational norm-setting and networks, and environmental stability. There are no standard tools to assess these domains, and there are only limited measures of socio-political climate[8]. We devised interview questions regarding environmental ‘pressure to adopt’ to tap into this construct.

Because no existing measures of incentives and mandates were identified, secondary data sources were used, such as a review of national mandates in provider handbooks from VA Central Office and discussions with one of the co-authors (JR), who is in charge of one of the national evidence-based roll-outs. Likewise, because no reliable existing measures of interorganizational norm-setting and networks were available, the team devised items to assess these constructs. Environmental stability was derived from interview questions asking whether staffing changes had occurred and about the perceived reasons for changes (e.g., moves, policy changes). This construct clearly overlaps with inner context (e.g., funding clearly translates into resources that are available within the inner context); however, environmental stability is assumed to be affected by external influences. Thus, our group devised survey items and interview questions and used administrative data to represent outer context constructs. While organizational written policies and procedures are likely accessible to most researchers, changes in budgets and funding may not be, particularly for researchers studying implementation from outside an organization. When possible, this type of information should be sought to support the understanding of outer context.

Implementation process

Factors proposed to represent the process of implementation include decision-making, hands-on approach by leader, human and dedicated resources, internal communication, external collaboration, reinvention/development, and feedback on progress. Consistent with Greenhalgh et al.’s two-phase approach, we primarily captured the implementation process through the interview.

Decision-making was assessed through the questions regarding decentralization described above. Because there are no established measures of hands-on approach by leader, or of human resources issues and dedicated resources, these were developed by group consensus. For internal communication, the interview asked whether a provider sought consultation from someone inside their setting regarding the innovation and its implementation. For external collaboration, we asked a specific question regarding outside formal consultation. We captured the construct of reinvention/development with an interview question concerning how the two innovations are used and whether they had been modified (e.g., number and format of sessions). Because no formal measure of feedback existed, we used the interview questions for monitoring and feedback to capture both constructs. Even though Greenhalgh et al. outline a separate set of constructs for implementation process, these seem to overlap with the other five broad constructs.

Discussion

Greenhalgh et al.[1] developed a largely evidence-based, comprehensive model of diffusion, dissemination, and implementation that can guide implementation research as well as efforts to facilitate implementation. Despite the model’s numerous strengths, there had been no explicit recommendations for operational definitions or measurement for most of the six identified constructs. Through a systematic literature review of measures for the associated constructs and an iterative process of team consensus, our group has taken a first step toward operationalizing, and thus testing, this model.

We are presently using a mixed-method measurement approach, combining quantitative data (the survey and administrative data) with qualitative data (semi-structured interviews and other artifacts, e.g., review of policies), to examine the implementation of two evidence-based psychotherapies for PTSD nationally within the VA[9]. Information from that study should assist in refining the measures, such as examination of psychometric properties and identification of changes needed to better operationalize the constructs. It will be essential, of course, to test the Greenhalgh et al. model, using our mixed-method approach and the resulting survey, interview, and administrative data, in additional healthcare organizations and settings and with non-mental health interventions. Given the challenge of operationalizing such a saturated model, this work should be considered a first step in the advancement of a testable theory. A contextual approach should be taken to strategically determine which constructs are most applicable to an individual study or evaluation. A more in-depth examination of several constructs may also be a needed next step.
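To give a concrete sense of one such psychometric check, the sketch below computes Cronbach's alpha, a standard index of internal consistency, for a set of survey items. The response data are invented for illustration and are not drawn from the study.

```python
# Illustrative sketch: Cronbach's alpha for a set of survey items.
# Rows are respondents, columns are items (hypothetical 5-point ratings).
from statistics import variance

responses = [
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

k = len(responses[0])                    # number of items
item_scores = list(zip(*responses))      # column-wise item scores
sum_item_vars = sum(variance(col) for col in item_scores)
total_var = variance([sum(row) for row in responses])

# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
print(round(alpha, 2))  # 0.92 for these illustrative data
```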

Limitations

Some variables potentially important in the process of implementation are not addressed in the Greenhalgh model. For example, there are several adopter characteristics and social cognition constructs that are not included (e.g., intention for behavior change, self-efficacy, memory)[92–94]. Further, in times of increasing fiscal constraint, it is important to note that the model does not consider the cost of the innovation itself or costs associated with its implementation, including investment, supply, and opportunity costs (as opposed to available resources from the inner setting)[7].

Other constructs receive mention in the model but likely warrant further refinement and elaboration. For example, while several constructs are similar to organizational culture and climate, concurrent use of other measurement tools may be warranted (e.g.,[84–87]). Similarly, the concept of leadership received only minimal attention in the Greenhalgh model, even though mental health researchers[10] have found this construct to be influential in implementation. Because the validity of the transtheoretical stages of change model has been questioned[95], alternatives may be needed to capture this important construct.

Other constructs are complicated by overlap (e.g., cosmopolitanism, social networks, and opinion leaders) or are applied to more than one domain. One example is feedback on progress, which is listed under the implementation process domain, while the very similar construct monitoring and feedback is listed under system readiness for innovation. Likewise, social networks are captured under both the adopter and communication and influence domains. Our measurement process attempted to streamline questioning (in both the survey and interview) by crafting questions to account for redundancy in constructs (e.g., reinvention).

We also chose not to include every construct and sub-construct in the model because their assessment would be burdensome for providers.1 In addition, some of these constructs were viewed as best captured in a larger meta-narrative[8] (e.g., assimilation and linkage), mapping the storyline and the interplay of contextual or contradictory information. Like most measures based on participant responses, our survey and interview may be influenced by intentional false reporting, inattentive responding, memory limitations, or participant fatigue.

Our search terms may not have identified all the relevant measures. For example, several other search terms may have captured the ‘implementation’ domain, such as uptake, adoption, and knowledge transfer. In addition, searching for the specific construct labels from this model assumes that there is consensus in the research community about the meaning of these terms and that no other terms are ever used to label these constructs.

Of course, operationalizing constructs is only one aspect of making a model testable. It also requires information about construct validity, a clear statement of the proposed relationships between elements in the model that would inform an analysis strategy, and a transparent articulation about the generalizability of the model and which contexts or factors might limit its applicability.

In sum, our work represents a significant step toward measuring Greenhalgh et al.’s comprehensive and evidence-based model of implementation. This conceptual and measurement development now provides for a more explicit, transparent, and testable theory. Despite limitations, the survey and interview measures, as well as our use of administrative data, can enhance research on implementation by providing investigators with a broad measurement tool that includes, in a single questionnaire and interview, most of the many factors affecting implementation found in the Greenhalgh model and other overarching theoretical formulations. One important next step will be to evaluate the psychometrics of this measure across various healthcare contexts and innovations, to examine whether the definitional/measurement boundaries are reliable and valid, and to further refine the measure. Empirical grounding of the process of implementation remains a work in progress.

Endnote

1See Figure 1. Terms not included in our operationalized survey, by construct and sub-construct: System antecedents for innovation: Absorptive capacity for new knowledge: ability to find, interpret, recodify, and integrate new knowledge; Linkage: Design stage: shared meanings and mission, effective knowledge transfer, user involvement in specification, capture of user-led innovation; Linkage: Implementation stage: communication and information, project management support; Assimilation: complex, nonlinear process, ‘soft periphery’ elements.