Introduction

Partial least squares (PLS) is a composite-based approach to structural equation modeling (SEM) that allows estimating complex interrelationships between constructs and their indicator variables (Hair et al. 2017; Lohmöller 1989; Wold 1982). PLS has gained much prominence in marketing applications of SEM, as evidenced in various reviews across different subfields (e.g., Guenther et al. 2023; Sarstedt et al. 2022, 2024; Wang et al. 2023). In recent years, researchers have introduced various extensions that expand on the original PLS–SEM algorithm and statistics (Hair et al. 2022, 2024). One such extension is the importance–performance map analysis (IPMA) that interprets the composite scores that the PLS–SEM algorithm generates as indicative of construct performances (Ringle and Sarstedt 2016; Streukens et al. 2017). The core of the IPMA is a two-dimensional map that contrasts these performance scores with the constructs’ total effects (i.e., the importance) on a specific target construct. The IPMA has been used in a variety of contexts, including research on customer loyalty (Damberg et al. 2022), sustainable consumption (Saari et al. 2021), and technology adoption (Mkedder and Özata 2024).

A potential limitation in the application of the standard IPMA is that it is restricted to a sufficiency logic. According to this logic, combinations of antecedent constructs are sufficient for impacting the target construct, and each construct’s influence can, in principle, be compensated for by the others. This logic differs from the necessity perspective, which has recently received more coverage in the marketing literature through the introduction of the necessary condition analysis (NCA; Dul 2016, 2020; Dul et al. 2021). The NCA identifies necessary conditions by establishing whether a specific condition must be present for an outcome to exist—in other words, whether the absence of the condition prevents the outcome from existing. In the case of a necessary condition, the analysis can also quantify the level of an antecedent variable that must be achieved so that a specific outcome level in the target becomes possible. Although the NCA was originally proposed in a standard regression context, Richter and Hauff (2022) and Richter et al. (2020, 2023b) suggested using PLS–SEM-based composite scores as input for the NCA. Several authors have used this approach to introduce a necessity perspective into their PLS–SEM analyses (e.g., Sukhov et al. 2022; Tan et al. 2024; Tiwari et al. 2024).

Hauff et al. (2024) have recently merged these perspectives into a unifying analysis framework called combined IPMA (cIPMA). Their cIPMA introduces the results from the NCA as an additional dimension in an importance–performance map—see Riggs et al. (2024) for an initial application. Figure 1 shows a sample map from a cIPMA. This hypothetical example considers three antecedent constructs with different total effects on the target construct (i.e., importance, shown on the x-axis) and the average construct values (i.e., performance, shown on the y-axis). The map also distinguishes between constructs with high versus low necessity effect sizes. Constructs that are not necessary for achieving the target construct’s desired level are shown as black circles (Y1 in Fig. 1), while the necessary constructs are displayed as white circles (Y2 and Y3 in Fig. 1). The size of the white circles indicates the percentage of observations whose case values are below those required for achieving a specific value in the target construct. Researchers have to specify this target value a priori, based on theoretical considerations or managerial requirements. In this example, the target value is set to 80. The larger the white circle, the larger the percentage of cases that have not achieved the necessary condition’s required level. Consequently, large white circles indicate that, from a necessity perspective, researchers should focus their attention on this aspect.

Fig. 1 Combined importance–performance map analysis (cIPMA) example

Running a cIPMA requires some data management effort as researchers need to combine elements from different analysis steps. Addressing this concern, this tutorial article illustrates the main steps of a cIPMA using SmartPLS 4 (Ringle et al. 2024), currently the most prominent software for conducting PLS–SEM analyses (e.g., Cheah et al. 2023b; Sarstedt and Cheah 2019). Our illustrations draw on the same model and dataset as in Hauff et al. (2024) to facilitate the method’s implementation and interpretation of results.

Case study illustration using SmartPLS 4

Hauff et al. (2024) outline an eight-step procedure for systematically applying the cIPMA (Fig. 2). Since this tutorial article endeavors to explain how to initiate the analyses and extract the relevant information from the output by using SmartPLS 4 software (Ringle et al. 2024), we focus on Steps 5 and 6, but also comment on the other elements of the analysis.

Fig. 2 A systematic procedure for running the cIPMA

The authors illustrate the cIPMA’s application by using an extended version of Davis’s (1989) technology acceptance model (TAM; Fig. 3), which has served as a blueprint for researching consumer behavior in various contexts. The dataset used in the illustration draws on N = 174 responses from French consumers. Richter et al.’s (2023a) article introduces the dataset in detail.

Fig. 3 Extended technology acceptance model (TAM)

The model and the dataset are included in SmartPLS 4 as a sample project, which we can install in the software with a mouse click. To do so, go to the Project window, click on Regression/PROCESS under Sample projects, and select NCA (extended TAM) from the drop-down menu (Fig. 4). SmartPLS will add a new sample project to the Workspace menu on the right of the window (Fig. 5). Note that this project already includes the final NCA model and the dataset derived from the IPMA. However, to demonstrate the analysis steps, we start by analyzing the PLS path model; do so by double-clicking on PLS–SEM for extended TAM (Fig. 5).

Fig. 4 SmartPLS Project window

Fig. 5 Workspace

SmartPLS then opens the Modeling window with the TAM readily specified (Fig. 6). Following the procedure that Hauff et al. (2024) outlined, the next step is to run the standard PLS–SEM algorithm (i.e., by selecting Standardized for the Type of results option in the PLS–SEM algorithm’s start dialog; Step 3 in Fig. 2). Based on these results, we assess the measurement models’ reliability and validity (Step 4 in Fig. 2). As part of this analysis, we also need to check whether all the indicator weights are positive. We do not present this detailed analysis, which follows the well-known standards in PLS–SEM, but refer the reader to Richter et al. (2020) and to Hauff et al.’s (2024) Table A2 (in their “Appendix”).

Fig. 6 Modeling window

We continue our illustration by running the IPMA (Step 5 in Fig. 2). To do so, we click on Calculate in the menu bar and select the option Importance–performance map analysis (IPMA) (Fig. 6). In the menu that opens, we choose Technology use as the target construct and All predecessors of the selected target construct under the IPMA results (Fig. 7, left tab). The lower part of the dialog box shows the indicators’ observed minimum and maximum values as well as the theoretical minimum and maximum values (Scale min and Scale max), which the software derives from the data structure. In this illustration, the theoretical values correspond to those used in the original survey (i.e., the respondents used the complete range of the scales). If this were not the case, the constructs’ estimated average performance values would be biased, because the rescaling would rely on the indicators’ empirical rather than their theoretical range. In such a situation, PLS–SEM researchers advise manually adjusting the theoretical values in the Data window (i.e., by correcting the Scale min and Scale max in SmartPLS where necessary), which we can access by double-clicking on the dataset in the Workspace (Fig. 5), clicking on Setup in the menu bar, implementing the desired changes, and clicking Update to save the file. To continue with the IPMA, we click on the PLS setup tab, select the settings shown in Fig. 7 (right tab), and click on Start calculation.
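
For readers who want to retrace how the performance scores on the map’s y-axis come about, the following minimal Python sketch illustrates the rescaling logic described by Ringle and Sarstedt (2016): indicators are rescaled to 0–100 using their theoretical scale bounds and aggregated with normalized outer weights. The data structures (indicators, weights, scale_min, scale_max) are hypothetical; SmartPLS performs these computations internally.

```python
# Minimal sketch (not SmartPLS's internal code) of the IPMA rescaling logic,
# assuming a pandas DataFrame `indicators` with one column per indicator,
# dicts `scale_min`/`scale_max` of theoretical scale bounds, and a dict
# `weights` of (positive) unstandardized outer weights for one construct.
import pandas as pd

def performance_score(indicators: pd.DataFrame,
                      weights: dict,
                      scale_min: dict,
                      scale_max: dict) -> float:
    # Rescale each indicator to 0-100 using its *theoretical* scale bounds,
    # not the empirically observed minimum and maximum (this is why Scale min
    # and Scale max must be correct in the Data window).
    rescaled = pd.DataFrame({
        col: (indicators[col] - scale_min[col])
             / (scale_max[col] - scale_min[col]) * 100
        for col in weights
    })
    # Normalize the outer weights to sum to one and form the construct score
    # as a weighted average of the rescaled indicators.
    total_w = sum(weights.values())
    scores = sum(rescaled[col] * (w / total_w) for col, w in weights.items())
    # The construct's performance is the mean rescaled score across cases.
    return float(scores.mean())
```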

Fig. 7 IPMA dialog box

Next, the SmartPLS software shows the estimates in the Results window. Figure 8 shows the graphical output of the results report. The numbers in the constructs are the average performance scores (i.e., the average rescaled construct scores, which range from 0 to 100). For example, while Compatibility has a performance score of 61.557, Ease of use achieves a considerably higher performance score of 75.640. The numbers on the arrows represent the direct effects between the constructs. To extract the total effects that the antecedent constructs have on the final target construct (Technology use), click on Final results → Total effects. Figure 9 shows that Adoption intention has the strongest total effect, followed by Emotional value, Usefulness, and Compatibility.
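
The total effects that serve as importance scores are the sums of all direct and indirect effects on the target construct. For a recursive structural model, they can be obtained from the matrix of direct path coefficients, as the following hedged sketch illustrates; the example matrix is hypothetical and not taken from the TAM.

```python
# Hedged sketch: computing total effects from a matrix of direct path
# coefficients B, where B[i, j] is the direct effect of construct j on
# construct i. For a recursive (acyclic) structural model, the total
# effects are B + B@B + B@B@B + ... = (I - B)^-1 - I.
import numpy as np

def total_effects(B: np.ndarray) -> np.ndarray:
    I = np.eye(B.shape[0])
    return np.linalg.inv(I - B) - I

# Illustrative (hypothetical) two-path chain: X1 -> X2 -> Y with direct
# effects 0.5 and 0.6; the total effect of X1 on Y is 0.5 * 0.6 = 0.3.
B = np.array([[0.0, 0.0, 0.0],   # X1
              [0.5, 0.0, 0.0],   # X2 <- X1
              [0.0, 0.6, 0.0]])  # Y  <- X2
print(total_effects(B))  # entry [2, 0] equals 0.3
```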

Fig. 8 IPMA results

Fig. 9 Total effects

SmartPLS can display the standard importance–performance map (Fig. 10), which we can access by clicking on Quality criteria → Importance–performance map. However, the software currently (version 4.1.0.3) does not include a feature for creating a combined importance–performance map. To create such a combined map, we need to save the importance and performance scores as input for the cIPMA. For example, researchers could copy and paste the results into an Excel spreadsheet similar to the one we provide as a cIPMA example on the following webpage: https://www.pls-sem.net/downloads/additional-useful-downloads/.
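
As an optional alternative to copying the results into Excel, the importance and performance scores can also be collected in a small data file, for instance with pandas. The sketch below is purely illustrative; the score values themselves have to be transferred from the SmartPLS report (Figs. 8 and 9).

```python
# Hedged alternative to copy-and-paste: collect the importance (total effects)
# and performance scores in a small pandas DataFrame for the later cIPMA steps.
# The None placeholders must be replaced with the values from the IPMA report.
import pandas as pd

ipma = pd.DataFrame({
    "construct": ["Adoption intention", "Compatibility", "Ease of use",
                  "Emotional value", "Usefulness"],
    "importance": [None] * 5,   # total effects on Technology use (Fig. 9)
    "performance": [None] * 5,  # average rescaled construct scores (Fig. 8)
})
ipma.to_csv("ipma_scores.csv", index=False)
```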

Fig. 10 Importance–performance map

Having extracted the importance (i.e., total effects) and performance scores, we need to export the rescaled latent variable scores into a separate dataset for processing in the NCA. Do so by clicking on Create data file in the menu bar (Fig. 8). In the dialog box that opens (Fig. 11), we specify a file name (e.g., Latent variable scores for the NCA), check the box next to Rescaled latent variable scores, and confirm by clicking on Create. SmartPLS will now generate a new dataset under the project. Next, we click on Edit followed by Back to return to the Project window (Fig. 12). The new dataset called Latent variable scores for the NCA is now shown under the PLS–SEM for extended TAM project.

Fig. 11 Create data file dialog box

Fig. 12 SmartPLS project window with new dataset

We next initiate the NCA by using the previously extracted latent variable scores as input (Step 6 in Fig. 2). We do so by clicking on Regression in the menu bar. In the window that opens (Fig. 13), we have to specify the project to which the model should be assigned (here, Example—NCA (extended TAM)), the model type (here, REGRESSION), and the model’s name (e.g., cIPMA). Next, we click on Save (Fig. 13).

Fig. 13 Regression dialog box

In the window that opens, we first need to select the newly created Latent variable scores for the NCA dataset by clicking on the symbol above the variable list (Fig. 14). Then, we drag and drop the dependent variable (LV scores—Technology use) onto the modeling window. Next, we drag and drop the independent variables (LV scores—Adoption intention, LV scores—Compatibility, LV scores—Ease of Use, LV scores—Emotional value, LV scores—Usefulness) onto the box labeled LV scores—Technology use in the modeling window. Figure 15 shows the final modeling window. We can now run the analysis by clicking on Calculate → Necessary condition analysis (NCA). In the dialog box that opens (Fig. 16), we choose 20 as the Number of steps for bottleneck table option because we are interested in identifying the necessary levels of the independent variables for a rescaled Technology use score of 85; with 20 steps, the bottleneck table proceeds in increments of 5, so this level appears, whereas it would not be shown with the default of 10 steps. Then, we click on Start calculation.

Fig. 14 Regression modeling window

Fig. 15 Regression modeling window with model

Fig. 16 NCA setup

SmartPLS now opens the results report that documents the metrics relevant for the NCA. Specifically, under Final results → Ceiling line effect size overview (Fig. 17), we can request the effect size d. We focus on the effect size for the CE-FDH ceiling line, which is the relevant line for our data (see Hauff et al. 2024). The results show that Emotional value has the strongest necessity effect size (0.331), followed by Adoption intention (0.294) and Usefulness (0.243). We need to substantiate these effect sizes’ significance by running a permutation analysis. Since we already know from previous studies which variables show significant necessity effect sizes, we first complete the illustration of the NCA results output that is useful for interpreting the findings before running the NCA permutation in SmartPLS. In an actual application, researchers should first establish the significance of the effect sizes and then return to these outputs, so as not to interpret necessity effects that are not significant.
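
To clarify what the CE-FDH effect size captures, the following sketch shows one way to compute the ceiling line and d = C/S (ceiling zone divided by scope) from two vectors of rescaled scores. This is our own illustration of the general NCA logic (Dul 2016), not SmartPLS’s implementation, which may differ in details such as the handling of ties.

```python
# Minimal sketch of the CE-FDH ceiling line and the necessity effect size
# d = C / S (ceiling zone / scope), assuming numpy arrays `x` (condition)
# and `y` (outcome); this is an illustration, not SmartPLS's code.
import numpy as np

def ce_fdh_effect_size(x: np.ndarray, y: np.ndarray) -> float:
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    # CE-FDH ceiling: a non-decreasing step function; at each observed x,
    # the ceiling equals the maximum y among cases with X <= x.
    ceiling = np.maximum.accumulate(ys)
    x_min, x_max = xs[0], xs[-1]
    y_max = y.max()
    scope = (x_max - x_min) * (y_max - y.min())      # S
    # Ceiling zone C: area between the step function and y_max,
    # integrated piecewise over consecutive observed x values.
    widths = np.diff(xs)
    heights = y_max - ceiling[:-1]
    ceiling_zone = float(np.sum(widths * heights))   # C
    return ceiling_zone / scope                      # d

# Usage (hypothetical rescaled latent variable scores):
# d = ce_fdh_effect_size(scores["Adoption intention"], scores["Technology use"])
```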

Fig. 17 NCA output (I)

For the cIPMA, we identify the percentage of cases that do not achieve the antecedent constructs’ required level for generating a specific level of Technology use. To request the corresponding table, go to Final results → Bottleneck tables—CE-FDH → Percentiles (Fig. 18). Hauff et al. (2024) assume a desired Technology use level of 85. At this level, our results show that 39.080% of all cases did not achieve the necessary level of Adoption intention to enable such a Technology use score (see the highlighted row in Fig. 18). Compared to Adoption intention, the percentage of cases that did not achieve the necessary level of Compatibility is considerably lower (8.621%).
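
The bottleneck percentages reported in Fig. 18 follow directly from the ceiling line: for a desired outcome level, the required condition level is the smallest score at which the CE-FDH ceiling reaches that level, and the percentage of cases below it can then be counted. The sketch below illustrates this logic under the same assumptions as the previous snippet; it is not SmartPLS’s implementation.

```python
# Hedged sketch of the bottleneck logic behind Fig. 18, assuming numpy arrays
# `x` (condition scores) and `y` (outcome scores), both rescaled to 0-100.
import numpy as np

def required_level(x: np.ndarray, y: np.ndarray, y_target: float):
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    ceiling = np.maximum.accumulate(ys)          # CE-FDH ceiling values
    feasible = np.where(ceiling >= y_target)[0]  # indices where the ceiling reaches y_target
    if feasible.size == 0:
        return None                              # target level not attainable in the data
    return xs[feasible[0]]                       # smallest sufficient condition level

def pct_cases_below(x: np.ndarray, y: np.ndarray, y_target: float) -> float:
    req = required_level(x, y, y_target)
    if req is None:
        return float("nan")
    # Share of cases whose condition score falls short of the required level,
    # e.g., roughly 39% for Adoption intention at a Technology use level of 85.
    return float(np.mean(x < req) * 100)
```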

Fig. 18 NCA output (II)

In the next step (Step 7 in Fig. 2), we need to evaluate the structural model in terms of the PLS–SEM and NCA results. For the former, we refer the reader to Richter et al. (2023a) and move directly to testing whether the necessity effect size d is significant. We do so by returning to the Modeling window by clicking on Edit in the menu bar. Next, we go to Calculate and select NCA permutation. In the dialog box that opens (Fig. 19), we retain the default settings (5000 permutations, parallel processing, a significance level of 0.05, and a fixed seed) and click on Start calculation. In the Results window that opens, we go to Final results → Ceiling line effect size overview → CE-FDH. We find that all necessity effect sizes are significantly larger than zero, since the estimates lie above the 95th percentile of the permutation distribution. For example, the necessity effect size of Adoption intention is 0.294, which is higher than the 95th percentile value of 0.180 (Fig. 20). These results are further supported by the p values, which are all lower than 0.05.
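
Conceptually, the NCA permutation test breaks the association between a condition and the outcome by repeatedly permuting the condition’s scores, recomputes the effect size for each permutation, and compares the observed d with the resulting distribution (95th percentile and p value). The following sketch, which reuses ce_fdh_effect_size() from the earlier snippet, illustrates this general logic; SmartPLS’s exact routine may differ.

```python
# Hedged sketch of an approximate permutation test for the CE-FDH effect size:
# permute X to break the X-Y association, recompute d, and compare with the
# observed value. Requires ce_fdh_effect_size() from the earlier sketch.
import numpy as np

def permutation_test(x, y, n_perm: int = 5000, seed: int = 42):
    rng = np.random.default_rng(seed)            # fixed seed for reproducibility
    d_obs = ce_fdh_effect_size(x, y)
    d_perm = np.array([ce_fdh_effect_size(rng.permutation(x), y)
                       for _ in range(n_perm)])
    p_value = float(np.mean(d_perm >= d_obs))    # one-sided: H0 is "no necessity"
    pct_95 = float(np.percentile(d_perm, 95))
    return d_obs, pct_95, p_value
```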

Fig. 19 NCA permutation dialog box

Fig. 20 NCA permutation results

Table 1 summarizes the results of the IPMA and the NCA. Specifically, the table shows the constructs’ importance for Technology use and their average performance scores from the PLS–SEM analysis. In addition, it shows the percentage of cases that do not meet the necessary condition (i.e., the cases that remain below the antecedent level required for a Technology use level of 85), as well as the necessity effect size d and the corresponding p value for each antecedent construct. In terms of the necessary conditions, we find that all antecedent constructs are indeed necessary, as their effect sizes are at least medium in size (i.e., d ≥ 0.1) and significant (p < 0.05).

Table 1 cIPMA results

We can now use the results from Table 1 to generate the combined importance–performance map with (1) the importance scores on the x-axis, (2) the performance scores on the y-axis, (3) the circle type indicating whether the antecedent construct is necessary (white = yes, black = no), and (4) the size of the white circles indicating the percentage of cases that do not achieve the required levels. To do so, we may use the Excel template, which we can access at https://www.pls-sem.net/downloads/additional-useful-downloads/. In our case, all five conditions are necessary, so we always use the percentage of cases that do not meet the required level as the input for the size of the white circle. If a condition is not necessary, the size of the black circle is standardized to 1.
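
As an alternative to the Excel template, the combined map can also be drawn with a few lines of matplotlib. The function below is a minimal, purely illustrative sketch; the input values have to be taken from Table 1, and the circle-size scaling factor is arbitrary.

```python
# Hedged sketch of a combined importance-performance map; the Table 1 values
# are passed in as arguments rather than hard-coded here.
import matplotlib.pyplot as plt

def plot_cipma(constructs, importance, performance, necessary, pct_below):
    """constructs: names; importance: total effects (x-axis); performance:
    average rescaled scores (y-axis); necessary: bool flags from the NCA
    permutation; pct_below: % of cases below the required level."""
    fig, ax = plt.subplots()
    for name, imp, perf, nec, pct in zip(constructs, importance,
                                         performance, necessary, pct_below):
        if nec:
            # Necessary condition: white circle, sized by the share of cases
            # that miss the required level (scaling factor is arbitrary).
            ax.scatter(imp, perf, s=pct * 20, facecolors="white", edgecolors="black")
        else:
            # Not necessary: black circle of standardized size.
            ax.scatter(imp, perf, s=20, color="black")
        ax.annotate(name, (imp, perf), textcoords="offset points", xytext=(5, 5))
    ax.set_xlabel("Importance (total effect on Technology use)")
    ax.set_ylabel("Performance (average rescaled construct score)")
    return fig

# Usage: fig = plot_cipma(constructs, importance, performance, necessary,
# pct_below) with the values from Table 1, then fig.savefig("cipma.png").
```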

Entering the values from Table 1 generates the combined importance–performance map shown in Fig. 21. In line with Hauff et al. (2024), the results suggest that Adoption intention is highly important and already shows a high performance. The NCA adds to this basic IPMA result, since it identifies several necessary conditions for Technology use. Specifically, 39% of the cases did not achieve Adoption intention’s required level to produce the desired performance level of 85 for Technology use. Despite their relatively low importance in the PLS–SEM results, Usefulness and Ease of use warrant attention, as many cases do not achieve the required levels (47% for Usefulness and 29% for Ease of use); failing to reach these levels would prevent Technology use from improving to the desired level. Finally, while Emotional value and Compatibility are also necessary, only a few cases fail to achieve the required levels (6% and 9%, respectively). Consequently, these constructs should receive less priority.

Fig. 21 Combined importance–performance map of the TAM

Observations and conclusions

During the last decade, research on PLS–SEM has made considerable progress in advancing the method’s capabilities—see, for example, Cheah et al. (2023a), Richter and Tudoran (2024), and Sarstedt and Liu (2024). One such extension is Hauff et al.’s (2024) cIPMA, which combines the results from an IPMA and an NCA in a joint map that supports managerial actions seeking to improve a certain target construct. The IPMA may, for example, identify antecedent constructs of relatively minor importance and performance that are nevertheless necessary for realizing a desired value of the target construct. Similarly, in the same context, a construct may be important but not necessary. The cIPMA allows its users to identify such relationships and dependencies. To facilitate its adoption, this article demonstrates the implementation of the cIPMA by means of the SmartPLS 4 software, which features prominently in marketing research and beyond (e.g., Cheah et al. 2023b; Richter et al. 2022; Sarstedt and Cheah 2019). Our step-by-step illustration of how to run the cIPMA in SmartPLS helps researchers and practitioners introduce a necessity perspective into their IPMA.

While the cIPMA offers a valuable way of combining sufficiency and necessity perspectives, future research should extend its scope further. For example, researchers may investigate routines that test the associations between the constructs in the PLS path model once a specific bottleneck identified in the NCA has been bypassed, and the implications of such routines for the interpretation of should-have factors. Researchers may also discuss and evaluate whether and how indirect (i.e., mediation) effects could be integrated into the NCA and, thereby, the cIPMA. Likewise, advancements such as Streukens et al.’s (2017) extension of the standard IPMA to accommodate nonlinear effects, whose specification and estimation have become more prominent (Basco et al. 2021), may be integrated into the cIPMA context. Researchers may also engage further in the discussion of the philosophies (Dul 2024a) and core research design elements involved when triangulating routines and methods (e.g., regarding sampling and sample size; see, for instance, Dul 2024b).

The cIPMA also provides relevant input for further conceptualizing the importance–performance management toolset itself. Sever (2015), for instance, noted that further conceptualization is needed with regard to the definition of the term importance, the definition of thresholds that demarcate the cut-off between high and low performance, and the differentiation of attributes positioned in the same quadrant and close to thresholds. The cIPMA offers input to all these areas of concern. Integrating the necessity logic into the IPMA not only offers relevant new input to the definition of importance but can also aid the definition of relevant threshold levels and guide the interpretation of constructs positioned within the quadrants of the map. Researchers engaged in developing the managerial toolset are invited to test our approach in combination with, or in contrast to, previous developments (such as sensitivity analyses and iso-rating lines).

Finally, there is room for researchers to address the interpretation of findings when the assumptions that underlie our cIPMA approach are not met. This relates, for instance, to research designs in which constructs use indicators with different scales (e.g., a construct whose indicators are measured on scales from 1 to 5 and from 1 to 7). While all of the above are relevant areas for further advancing the cIPMA and its related toolsets, we are confident that the method offers a valuable means of advancing both managerial decision making and academic research.