Handbook of Market Research
Partial Least Squares Structural Equation Modeling
Abstract
Partial least squares structural equation modeling (PLS-SEM) has become a popular method for estimating (complex) path models with latent variables and their relationships. Building on an introduction of the fundamentals of measurement and structural theory, this chapter explains how to specify and estimate path models using PLS-SEM. Complementing the introduction of the PLS-SEM method and the description of how to evaluate analysis results, the chapter also offers an overview of complementary analytical techniques. An application of the PLS-SEM method to a well-known corporate reputation model using the SmartPLS 3 software illustrates the concepts.
Keywords
Partial least squares · PLS · PLS path modeling · PLS-SEM · SEM · Variance-based structural equation modeling
Introduction
In the 1970s and 1980s, the Swedish econometrician Herman O. A. Wold (1975, 1982, 1985) “vigorously pursued the creation and construction of models and methods for the social sciences, where ‘soft models and soft data’ were the rule rather than the exception, and where approaches strongly oriented at prediction would be of great value” (Dijkstra 2010, p. 24). One procedure that emerged from Wold’s efforts was partial least squares path modeling, which later evolved into partial least squares structural equation modeling (PLS-SEM; Hair et al. 2011). PLS-SEM estimates the parameters of a set of equations in a structural equation model by combining principal components analysis with regression-based path analysis (Mateos-Aparicio 2011). Wold (1982) proposed his “soft model basic design” underlying PLS-SEM as an alternative to Jöreskog’s (1973) factor-based SEM or covariance-based SEM, which has been labeled hard modeling because of its numerous and rather restrictive assumptions, both for establishing a structural equation model and in terms of data distribution and sample size. Importantly, “it is not the concepts nor the models nor the estimation techniques which are ‘soft’, only the distributional assumptions” (Lohmöller 1989, p. 64).
PLS-SEM enjoys widespread popularity in a broad range of disciplines including accounting (Lee et al. 2011; Nitzl 2016), group and organization management (Sosik et al. 2009), hospitality management (Ali et al. 2017), international management (Richter et al. 2016a), operations management (Peng and Lai 2012), management information systems (Hair et al. 2017a; Ringle et al. 2012), marketing (Hair et al. 2012b), strategic management (Hair et al. 2012a), supply chain management (Kaufmann and Gaeckler 2015), and tourism (do Valle and Assaker 2016). Contributions in terms of books, edited volumes, and journal articles applying PLS-SEM or proposing methodological extensions are appearing at a rapid pace (e.g., Latan and Noonan 2017; Esposito Vinzi et al. 2010; Hair et al. 2017b, 2018; Garson 2016; Ramayah et al. 2016). A main reason for PLS-SEM’s attractiveness is that the method allows researchers to estimate very complex models with many constructs and indicator variables, especially when prediction is the goal of the analysis. Furthermore, PLS-SEM generally allows for much flexibility in terms of data requirements and the specification of relationships between constructs and indicator variables. Another reason is the accessibility of easy-to-use software with graphical user interfaces such as ADANCO, PLS-Graph, SmartPLS, WarpPLS, and XLSTAT. Packages for statistical computing software environments such as R complement the set of programs (e.g., semPLS).
The objective of this chapter is to explain the fundamentals of PLS-SEM. Building on Sarstedt et al. (2016b), this chapter first provides an introduction of measurement and structural theory as a basis for presenting the PLS-SEM method. Next, the chapter discusses the evaluation of results, provides an overview of complementary analytical techniques, and concludes by describing an application of the PLS-SEM method to a well-known corporate reputation model, using the SmartPLS 3 software.
Principles of Structural Equation Modeling
Path Models with Latent Variables
Constructs, also referred to as latent variables, are elements in statistical models that represent conceptual variables that researchers define in their theoretical models. Constructs are visualized as circles or ovals (Y_1 to Y_3) in path models, linked via single-headed arrows that represent predictive relationships. The indicators, often also named manifest variables or items, are directly measured or observed variables that represent the raw data (e.g., respondents’ answers to a questionnaire). They are represented as rectangles (x_1 to x_9) in path models and are linked to their corresponding constructs through arrows.
A path model consists of two elements. The structural model represents the structural paths between the constructs, whereas the measurement models represent the relationships between each construct and its associated indicators. In PLS-SEM, structural and measurement models are also referred to as inner and outer models. To develop path models, researchers need to draw on structural theory and measurement theory, which specify the relationships between the elements of a path model.
Structural Theory
Structural theory indicates the latent variables to be considered in the analysis of a certain phenomenon and their relationships. The location and sequence of the constructs are based on theory and the researcher’s experience and accumulated knowledge (Falk and Miller 1992). When researchers develop path models, the sequence is typically from left to right. The latent variables on the left side of the path model are independent variables, and any latent variable on the right side is the dependent variable (Fig. 1). However, latent variables may also serve as both an independent and dependent variable in the model (Haenlein and Kaplan 2004).
When a latent variable only serves as an independent variable, it is called an exogenous latent variable (Y_1 in Fig. 1). When a latent variable only serves as a dependent variable (Y_3 in Fig. 1), or as both an independent and a dependent variable (Y_2 in Fig. 1), it is called an endogenous latent variable. Endogenous latent variables always have error terms associated with them. In Fig. 1, the endogenous latent variables Y_2 and Y_3 have one error term each (z_2 and z_3), which reflect the sources of variance not captured by the respective antecedent construct(s) in the structural model. The exogenous latent variable Y_1 also has an error term (z_1), but in PLS-SEM this error term is constrained to zero because of the way the method treats the (formative) measurement model of this particular construct (Diamantopoulos 2011). Therefore, this error term is typically omitted in the display of a PLS path model. In case an exogenous latent variable draws on a reflective measurement model, there is no error term attached to this particular construct.
The strength of the relationships between latent variables is represented by path coefficients (i.e., b_1, b_2, and b_3), which result from regressions of each endogenous latent variable on its direct predecessor constructs. For example, b_1 and b_3 result from the regression of Y_3 on Y_1 and Y_2.
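As a minimal illustration (with hypothetical standardized latent variable scores), the path coefficients can be obtained by ordinary least squares; here Y_3 is regressed on its two predecessors Y_1 and Y_2:

```python
def path_coefficients(y1, y2, y3):
    """Regress Y_3 on Y_1 and Y_2 by solving the 2x2 normal equations;
    assumes the latent variable scores are standardized."""
    s11 = sum(a * a for a in y1)
    s22 = sum(b * b for b in y2)
    s12 = sum(a * b for a, b in zip(y1, y2))
    s1y = sum(a * c for a, c in zip(y1, y3))
    s2y = sum(b * c for b, c in zip(y2, y3))
    det = s11 * s22 - s12 * s12
    b1 = (s1y * s22 - s2y * s12) / det   # path Y_1 -> Y_3
    b3 = (s2y * s11 - s1y * s12) / det   # path Y_2 -> Y_3
    return b1, b3

# Hypothetical scores: Y_3 depends only on Y_1 here, so b1 = 1 and b3 = 0.
Y1 = [1.0, -1.0, 2.0, -2.0]
Y2 = [1.0, 1.0, -1.0, -1.0]
Y3 = list(Y1)
b1, b3 = path_coefficients(Y1, Y2, Y3)
```

In real applications the scores come from the PLS-SEM algorithm itself (see below), and software such as SmartPLS performs these regressions internally.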
Measurement Theory
Measurement theory specifies how to measure the latent variables. Researchers can generally choose between two different types of measurement models (Diamantopoulos and Winklhofer 2001; Coltman et al. 2008): reflective measurement models and formative measurement models.
In a reflective measurement model, the indicators are regarded as manifestations of the construct (i.e., the relationship is from the construct to its indicators), which is why reflective indicators are expected to correlate strongly. In contrast, in a formative measurement model, a linear combination of a set of indicators forms the construct (i.e., the relationship is from the indicators to the construct). Hence, “variation in the indicators precedes variation in the latent variable” (Borsboom et al. 2003, p. 2008). Indicators of formatively measured constructs do not necessarily have to correlate strongly as is the case with reflective indicators. Note that strong indicator correlations can also occur in formative measurement models and do not necessarily imply that the measurement model is reflective in nature (Nitzl and Chin 2017).
According to Henseler (2017, p. 180), measurement models with composite indicators “are a prescription of how the ingredients should be arranged to form a new entity,” which he refers to as artifacts . That is, composite indicators define the construct’s empirical meaning. Henseler (2017) identifies Aaker’s (1991) conceptualization of brand equity as a typical conceptual variable with composite indicators (i.e., an artifact) in advertising research, comprising the elements brand awareness, brand associations, brand quality, brand loyalty, and other proprietary assets. The use of artifacts is especially prevalent in the analysis of secondary and archival data, which typically lack a comprehensive substantiation on the grounds of measurement theory (Rigdon 2013a; Houston 2004). For example, a researcher may use secondary data to form an index of a company’s communication activities, covering aspects such as online advertising, sponsoring, or product placement (Sarstedt and Mooi 2014). Alternatively, composite indicator models can be thought of as a means to capture the essence of a conceptual variable by means of a limited number of indicators (Dijkstra and Henseler 2011). For example, a researcher may be interested in measuring a company’s corporate social responsibility using a set of five (composite) indicators that capture salient features relevant to the particular study. More recent research contends that composite indicators can be used to measure any concept including attitudes, perceptions, and behavioral intentions (Nitzl and Chin 2017). However, composite indicators are not a free ride for careless measurement. Instead, “as with any type of measurement conceptualization, however, researchers need to offer a clear construct definition and specify items that closely match this definition – that is, they must share conceptual unity” (Sarstedt et al. 2016b, p. 4002). 
Composite indicator models instead view construct measurement as an approximation of conceptual variables, acknowledging the practical problems that come with measuring the unobservable conceptual variables that populate theoretical models. Whether researchers deem an approximation sufficient depends on their philosophy of science. From an empirical realist perspective, researchers want to see results from causal and composite indicators converging upon the conceptual variable, which they assume exists independent of observation and transcends data. An empiricist defines a certain concept in terms of data, as a function of observed variables, leaving the relationship between conceptual variable and construct untapped (Rigdon et al. 2017).
Path Model Estimation with PLS-SEM
Background
Different from factor-based SEM, PLS-SEM explicitly calculates case values for the latent variables as part of the algorithm. For this purpose, the “unobservable variables are estimated as exact linear combinations of their empirical indicators” (Fornell and Bookstein 1982, p. 441) such that the resulting composites capture most of the variance of the exogenous constructs’ indicators that is useful for predicting the endogenous constructs’ indicators (e.g., McDonald 1996). PLS-SEM uses these composites to represent the constructs in a PLS path model, considering them as approximations of the conceptual variables under consideration (e.g., Henseler et al. 2016a; Rigdon 2012).
Since PLS-SEM-based model estimation always relies on composites, regardless of the measurement model specification, the method can process reflectively and formatively specified measurement models without identification issues (Hair et al. 2011). Identification of PLS path models only requires that each construct is linked with a significant path to the nomological net of constructs (Henseler et al. 2016a). This characteristic also applies to model settings in which endogenous constructs are specified formatively, as PLS-SEM relies on a multistage estimation process, which separates measurement from structural model estimation (Rigdon et al. 2014).
Three aspects are important for understanding the interplay between data, measurement, and model estimation in PLS-SEM. First, PLS-SEM handles all indicators of formative measurement models as composite indicators. Hence, a formatively specified construct in PLS-SEM does not have an error term as is the case with causal indicators in factor-based SEM (Diamantopoulos 2011).
Second, when the data stem from a common factor model population (i.e., the indicator covariances define the data’s nature), PLS-SEM’s parameter estimates deviate from the prespecified values. This characteristic, also known as the PLS-SEM bias, entails that the method overestimates the measurement model parameters and underestimates the structural model parameters (e.g., Chin et al. 2003). The degree of over- and underestimation decreases when both the number of indicators per construct and the sample size increase (consistency at large; Hui and Wold 1982). However, some recent work on PLS-SEM urges researchers to avoid the term PLS-SEM bias since the characteristic is based on specific assumptions about the nature of the data that do not necessarily have to hold (e.g., Rigdon 2016). Specifically, when the data stem from a composite model population in which linear combinations of the indicators define the data’s nature, PLS-SEM estimates are unbiased and consistent (Sarstedt et al. 2016b). Apart from that, research has shown that the bias produced by PLS-SEM when estimating data from common factor model populations is low in absolute terms (e.g., Reinartz et al. 2009), particularly when compared to the bias that common factor-based SEM produces when estimating data from composite model populations (Sarstedt et al. 2016b). “Clearly, PLS is optimal for estimating composite models while simultaneously allowing approximating common factor models with effect indicators with practically no limitations” (Sarstedt et al. 2016b, p. 4008).
Third, PLS-SEM’s use of composites not only has implications for the method’s philosophy of measurement but also for its area of application. In PLS-SEM, once the weights are derived, the method always produces a single specific (i.e., determinate) score for each case per composite. Using these scores as input, PLS-SEM applies a series of ordinary least squares regressions, which estimate the model parameters such that they maximize the endogenous constructs’ explained variance (i.e., their R² values). Evermann and Tate’s (2016) simulation studies show that PLS-SEM outperforms factor-based SEM in terms of prediction. In light of their results, the authors conclude that PLS-SEM allows researchers “to work with an explanatory, theory-based model, to aid in theory development, evaluation, and selection.” Similarly, Becker et al.’s (2013a) simulation study provides support for PLS-SEM’s superior predictive capabilities.
The PLS-SEM Algorithm
The algorithm starts with an initialization stage in which it establishes preliminary latent variable scores. To compute these scores, the algorithm typically uses unit weights (i.e., 1) for all indicators in the measurement models (Hair et al. 2017b).
Stage 1 of the PLS-SEM algorithm iteratively determines the inner weights and latent variable scores by means of a four-step procedure. Consistent with the algorithm’s original presentation (Lohmöller 1989), inner weights refer to path coefficients, while outer weights and outer loadings refer to indicator weights and loadings in the measurement models. Step #1 uses the initial latent variable scores from the initialization of the algorithm to determine the inner weights b_ji between the adjacent latent variables Y_j (i.e., the dependent one) and Y_i (i.e., the independent one) in the structural model. The literature suggests three approaches to determine the inner weights (Lohmöller 1989; Chin 1998; Tenenhaus et al. 2005). In the centroid scheme, the inner weights are set to +1 if the covariance between Y_j and Y_i is positive and to −1 if this covariance is negative. In case two latent variables are unconnected, the weight is set to 0. In the factor weighting scheme, the inner weight corresponds to the covariance between Y_j and Y_i and is set to zero in case the latent variables are unconnected. Finally, the path weighting scheme takes into account the direction of the inner model relationships (Lohmöller 1989). Chin (1998, p. 309) notes that the path weighting scheme “attempts to produce a component that can both ideally be predicted (as a predictand) and at the same time be a good predictor for subsequent dependent variables.” As a result, the path weighting scheme leads to slightly higher R² values in the endogenous latent variables compared to the other schemes and should therefore be preferred. In most instances, however, the choice of the inner weighting scheme has very little bearing on the results (Noonan and Wold 1982; Lohmöller 1989).
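The centroid and factor weighting schemes can be sketched in a few lines of code (illustrative only; the path weighting scheme additionally requires the direction of each structural relationship, which is omitted here):

```python
def covariance(a, b):
    """Covariance of two latent variable score vectors (population formula)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

def inner_weight(scheme, y_j, y_i, connected=True):
    """Inner weight between adjacent latent variables Y_j and Y_i."""
    if not connected:            # unconnected latent variables get weight 0
        return 0.0
    c = covariance(y_j, y_i)
    if scheme == "centroid":     # sign of the covariance: +1 or -1
        return 1.0 if c >= 0 else -1.0
    if scheme == "factor":       # the covariance itself
        return c
    raise ValueError("unknown scheme")

# Hypothetical standardized scores for two connected latent variables.
scores_j = [1.0, -1.0, 0.5, -0.5]
scores_i = [0.8, -0.8, 0.4, -0.4]
w_centroid = inner_weight("centroid", scores_j, scores_i)
w_factor = inner_weight("factor", scores_j, scores_i)
```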
Step #2, the inside approximation, computes a proxy Ỹ_j for each latent variable as the weighted sum of the scores of its adjacent latent variables Y_i. Then, for all the indicators in the measurement models, Step #3 computes new outer weights indicating the strength of the relationship between each latent variable Ỹ_j and its corresponding indicators. To do so, the PLS-SEM algorithm uses two different estimation modes. When using Mode A (i.e., correlation weights), the bivariate correlation between each indicator and the construct determines the outer weights. In contrast, Mode B (i.e., regression weights) computes indicator weights by regressing each construct on its associated indicators.
By default, estimation of reflectively specified constructs draws on Mode A, whereas PLS-SEM uses Mode B for formatively specified constructs. However, Becker et al. (2013a) show that this reflex-like use of Mode A and Mode B is not optimal under all conditions. For example, when constructs are specified formatively, Mode A estimation yields better out-of-sample prediction when the model estimation draws on more than 100 observations and when the endogenous construct’s R² value is 0.30 or higher.
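The difference between the two modes can be sketched as follows, assuming standardized data for a construct with two indicators (the data are hypothetical). When the indicators are correlated, correlation weights (Mode A) and regression weights (Mode B) diverge:

```python
def mode_a_weights(indicators, proxy):
    """Mode A: bivariate correlation of each indicator with the construct
    proxy (all vectors standardized, population convention)."""
    n = len(proxy)
    return [sum(x * y for x, y in zip(ind, proxy)) / n for ind in indicators]

def mode_b_weights(x1, x2, proxy):
    """Mode B: regress the proxy on two indicators (2x2 normal equations)."""
    s11 = sum(a * a for a in x1)
    s22 = sum(b * b for b in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * c for a, c in zip(x1, proxy))
    s2y = sum(b * c for b, c in zip(x2, proxy))
    det = s11 * s22 - s12 * s12
    return [(s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det]

# Hypothetical indicators: x2 correlates with x1; the proxy equals x1.
x1 = [1.0, -1.0, 1.0, -1.0]
x2 = [1.3416, -0.4472, 0.4472, -1.3416]   # standardized, corr(x1, x2) ~ 0.89
proxy = list(x1)
wa = mode_a_weights([x1, x2], proxy)   # both weights nonzero (correlations)
wb = mode_b_weights(x1, x2, proxy)     # x2 adds nothing beyond x1, weight 0
```

Mode A credits each indicator with its full bivariate correlation, whereas Mode B partials out what the other indicators already explain, which is why Mode B weights can shrink toward zero for redundant indicators.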
Figure 2 shows the formal representation of these two modes, where x_kjn represents the raw data for indicator k (k = 1, …, K) of latent variable j (j = 1, …, J) and observation n (n = 1, …, N), Ỹ_jn are the latent variable scores from the inside approximation in Step #2, w̃_kj are the outer weights from Step #3, d_jn is the error term from a bivariate regression, and e_kjn is the error term from a multiple regression. The updated weights from Step #3 (i.e., w̃_kj) and the indicators (i.e., x_kjn) are linearly combined to update the latent variable scores (i.e., Y_jn) in Step #4 (outside approximation). Note that the PLS-SEM algorithm uses standardized data as input and always standardizes the generated latent variable scores in Step #2 and Step #4. After Step #4, a new iteration starts; the algorithm terminates when the weights obtained from Step #3 change only marginally from one iteration to the next (typically by less than 1 × 10^−7), or when the maximum number of iterations is reached (typically 300; Henseler 2010).
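Putting the four steps together, a bare-bones version of Stage 1 for a two-construct model (Y_1 → Y_2, each measured reflectively by two indicators) might look as follows. The data, the factor weighting scheme, and Mode A estimation for both blocks are illustrative choices for this sketch, not a replication of any particular software implementation:

```python
import math

def standardize(v):
    """Z-standardize a vector (population standard deviation)."""
    n = len(v)
    m = sum(v) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in v) / n)
    return [(x - m) / sd for x in v]

def corr(a, b):
    """Correlation of two standardized vectors."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

def combine(weights, columns):
    """Linear combination of indicator columns with the given weights."""
    return [sum(w * col[i] for w, col in zip(weights, columns))
            for i in range(len(columns[0]))]

# Hypothetical raw data: x1, x2 load on Y_1; x3, x4 load on Y_2.
X1 = [standardize(c) for c in ([1, 2, 3, 4, 5], [2, 2, 3, 5, 5])]
X2 = [standardize(c) for c in ([1, 3, 3, 4, 6], [2, 2, 4, 4, 5])]

w1, w2 = [1.0, 1.0], [1.0, 1.0]              # initialization: unit weights
for _ in range(300):                         # maximum number of iterations
    Y1 = standardize(combine(w1, X1))        # Step #4: outside approximation
    Y2 = standardize(combine(w2, X2))
    b = corr(Y1, Y2)                         # Step #1: factor scheme weight
    Y1_in = standardize([b * y for y in Y2]) # Step #2: inside approximation
    Y2_in = standardize([b * y for y in Y1])
    nw1 = [corr(x, Y1_in) for x in X1]       # Step #3: Mode A outer weights
    nw2 = [corr(x, Y2_in) for x in X2]
    change = max(abs(a - c) for a, c in zip(nw1 + nw2, w1 + w2))
    w1, w2 = nw1, nw2
    if change < 1e-7:                        # stop criterion
        break

# Final scores feed Stages 2 and 3; e.g., the path coefficient Y_1 -> Y_2:
path_b1 = corr(standardize(combine(w1, X1)), standardize(combine(w2, X2)))
```

With only one structural relationship, the final path coefficient equals the correlation between the two composite scores; larger models require the full set of partial regressions described above.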
Stages 2 and 3 use the final latent variable scores from Stage 1 as input for a series of ordinary least squares regressions. These regressions produce the final outer loadings, outer weights, and path coefficients as well as related elements such as indirect and total effects, R² values of the endogenous latent variables, and the indicator and latent variable correlations (Lohmöller 1989).
Research has proposed several variations of the original PLS-SEM algorithm. Lohmöller’s (1989) extended PLS-SEM algorithm, for example, allows assigning more than one latent variable to a block of indicators and imposing orthogonality restrictions among constructs in the structural model. More recently, Becker and Ismail (2016) developed a modified version of the original PLS-SEM algorithm that uses sampling (poststratification) weights to correct for sampling error. Their weighted PLS-SEM approach considers a weights vector defined by the researcher in order to ensure correspondence between sample and population structure (Sarstedt et al. 2017).
Moreover, Dijkstra and Henseler (2015a, b) introduced the consistent PLS (PLSc) approach, which has also been generalized to nonlinear structural equation models (Dijkstra and Schermelleh-Engel 2014). PLSc is a modified version of Lohmöller’s (1989) original PLS-SEM algorithm that produces model estimates that follow a common factor model approach to measurement – see Dijkstra and Henseler (2015b) for an empirical comparison of PLSc and factor-based SEM. Note that PLSc’s introduction has also been viewed critically, however, because of its focus on common factor models. As indicated by Hair et al. (2017a, p. 443): “It is unclear why researchers would use these alternative approaches to PLS-SEM when they could easily apply the much more widely recognized and validated CB-SEM [i.e., factor-based SEM] method.” PLS-SEM does not produce inconsistent estimates per se but only when used to estimate common factor models, just like factor-based SEM produces inconsistent estimates when used to estimate composite models (Sarstedt et al. 2016b). In addition to the already existing but infrequently used capabilities to estimate unstandardized coefficients (with or without intercept), Dijkstra and Henseler (2015b) contend that PLS-SEM and PLSc provide a basis for the implementation of other, more versatile estimators of structural model relationships, such as two-stage least squares and seemingly unrelated regression. These extensions would facilitate the analysis of path models with circular relationships between the latent variables (so-called nonrecursive models).
Further methodological advances may facilitate, for example, the consideration of endogeneity in the structural model, which occurs when an endogenous latent variable’s error term is correlated with the scores of one or more explanatory variables in a partial regression relationship.
Additional Considerations When Using PLS-SEM
Research has witnessed a considerable debate about situations that favor or hinder the use of PLS-SEM (e.g., Goodhue et al. 2012; Marcoulides et al. 2012; Marcoulides and Saunders 2006; Rigdon 2014a; Henseler et al. 2014). Complementing our above discussion of the method’s treatment of latent variables and the consequences for measurement model specification and estimation, in the following sections we introduce further aspects that are relevant when considering the use of PLS-SEM and that have been discussed in the literature (e.g., Hair et al. 2013). Where necessary, we refer to differences from factor-based SEM, even though such comparisons should not be made indiscriminately (e.g., Marcoulides and Chin 2013; Rigdon 2016; Rigdon et al. 2017; Hair et al. 2017c).
Distributional Assumptions
Many researchers indicate that they prefer the nonparametric PLS-SEM approach because their data’s distribution does not meet the requirements of the parametric factor-based SEM approach (e.g., Hair et al. 2012b; Nitzl 2016; do Valle and Assaker 2016). Methods researchers have long noted, however, that this sole justification for PLS-SEM use is inappropriate since maximum likelihood estimation in factor-based SEM is robust against violations of normality (e.g., Chou et al. 1991; Olsson et al. 2000). Furthermore, the factor-based SEM literature offers robust procedures for parameter estimation, which work well at smaller sample sizes (Lei and Wu 2012).
Consequently, justifying the use of PLS-SEM solely on the grounds of data distribution is not sufficient (Rigdon 2016). Researchers should rather choose the PLS-SEM method for more profound reasons (see Richter et al. 2016b), such as the goal of their analysis.
Statistical Power
When using PLS-SEM, researchers benefit from the method’s greater statistical power compared to factor-based SEM, even when estimating data generated from a common factor model population. Because of its greater statistical power, the PLS-SEM method is more likely to identify an effect as significant when it is indeed significant. However, while several studies have offered evidence for PLS-SEM’s increased statistical power (e.g., Reinartz et al. 2009; Goodhue et al. 2012), also vis-à-vis other composite-based approaches to SEM (Hair et al. 2017c), prior research has not examined the origins of this feature.
The characteristic of higher statistical power makes PLS-SEM particularly suitable for exploratory research settings where theory is less developed and the goal is to reveal substantial (i.e., strong) effects (Chin 2010). As Wold (1980, p. 70) notes, “the arrow scheme is usually tentative since the model construction is an evolutionary process. The empirical content of the model is extracted from the data, and the model is improved by interactions through the estimation procedure between the model and the data and the reactions of the researcher.”
Model Complexity and Sample Size
PLS-SEM works efficiently with small sample sizes when models are complex (e.g., Fornell and Bookstein 1982; Willaby et al. 2015). Prior reviews of SEM use show that the average number of constructs per model is clearly higher in PLS-SEM (approximately eight constructs; e.g., Kaufmann and Gaeckler 2015; Ringle et al. 2012) than in factor-based SEM (approximately five constructs; e.g., Shah and Goldstein 2006; Baumgartner and Homburg 1996). Similarly, the number of indicators per construct is typically higher in PLS-SEM compared to factor-based SEM, which is not surprising considering the negative effect of more indicators on χ²-based fit measures in factor-based SEM. Different from factor-based SEM, the PLS-SEM algorithm does not simultaneously compute all the model relationships (see previous section), but instead uses separate ordinary least squares regressions to estimate the model’s partial regression relationships, as implied by its name. As a result, the overall number of model parameters can be extremely high in relation to the sample size as long as each partial regression relationship draws on a sufficient number of observations. Reinartz et al. (2009), Henseler et al. (2014), and Sarstedt et al. (2016b) show that PLS-SEM provides solutions when other methods do not converge or produce inadmissible solutions, regardless of whether common factor or composite model data are used. However, as Hair et al. (2013, p. 2) note, “some researchers abuse this advantage by relying on extremely small samples relative to the underlying population” and “PLS-SEM has an erroneous reputation for offering special sampling capabilities that no other multivariate analysis tool has” (also see Marcoulides et al. 2009).
PLS-SEM can be applied with smaller samples in many instances when other methods fail, but the legitimacy of such analyses depends on the size and the nature of the population (e.g., in terms of its heterogeneity). No statistical method – including PLS-SEM – can offset a badly designed sample (Sarstedt et al. 2017). To determine the necessary sample size, researchers should run power analyses that take into account the model structure, the expected effect sizes, and the significance level; for example, Marcoulides and Chin (2013) provide power tables for a range of path model constellations. Kock and Hadaya (2017) suggest two new methods for determining the minimum sample size in PLS-SEM applications.
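One of Kock and Hadaya's (2017) approaches, the inverse square root method, is simple enough to sketch here: the minimum sample size must exceed (2.486 / |p_min|)², where p_min is the smallest path coefficient expected to be significant and the constant 2.486 corresponds to a 5% significance level with 80% statistical power:

```python
import math

def min_sample_size(p_min, constant=2.486):
    """Inverse square root method (Kock and Hadaya 2017): minimum sample
    size for detecting a smallest path coefficient of magnitude |p_min|
    at the 5% significance level with 80% power (constant 2.486 for
    that setting)."""
    return math.ceil((constant / abs(p_min)) ** 2)

n_small_effect = min_sample_size(0.2)   # weak effect -> large sample needed
n_large_effect = min_sample_size(0.5)   # strong effect -> modest sample
```

Other significance/power combinations use different constants; a full power analysis (e.g., via the tables cited above) remains the more general approach.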
While much attention has been devoted to PLS-SEM’s small sample size capabilities (e.g., Goodhue et al. 2012), discussions often ignore that the method is also suitable for the analysis of large data quantities such as those produced in Internet research, social media, and social network applications (e.g., Akter et al. 2017). Analyses of social media data typically focus on prediction, rely on complex models with little theoretical substantiation (Stieglitz et al. 2014), and often lack a comprehensive substantiation on the grounds of measurement theory (Rigdon 2013b). PLS-SEM’s nonparametric nature and its ability to handle complex models with many (say, eight or considerably more) constructs and indicators, along with its high statistical power, make the method a valuable tool for social media analytics and the analysis of other types of large-scale data.
Goodness-of-Fit
PLS-SEM does not have an established goodness-of-fit measure. As a consequence, some researchers conclude that PLS-SEM’s use for theory testing and confirmation is limited (e.g., Westland 2015). Recent research has, however, started reexamining goodness-of-fit measures proposed in the early days of PLS-SEM (Lohmöller 1989) or suggesting new ones, thereby broadening the method’s applicability (Henseler et al. 2014; Dijkstra and Henseler 2015a). Examples of goodness-of-fit measures suggested in a PLS-SEM context include the standardized root mean square residual (SRMR), the root mean square residual covariance (RMS_theta), the normed fit index (NFI; also referred to as the Bentler-Bonett index), the non-normed fit index (NNFI; also referred to as the Tucker-Lewis index), and the exact model fit test (Dijkstra and Henseler 2015a; Lohmöller 1989; Henseler et al. 2014). Note that the goodness-of-fit criterion proposed by Tenenhaus et al. (2005) does not represent a valid measure of model fit (Henseler and Sarstedt 2013). Also, the use of the NFI usually is not recommended as it systematically improves for more complex models (Hu and Bentler 1998).
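As an illustration of one of these criteria, the SRMR is the root of the mean squared difference between the observed and the model-implied correlations; a minimal sketch with hypothetical correlation matrices follows (definitions vary slightly, e.g., in whether the diagonal is included):

```python
import math

def srmr(observed, implied):
    """Standardized root mean square residual over the off-diagonal
    elements of the lower triangle of two correlation matrices.
    (Some definitions also include the diagonal; for correlation
    matrices those residuals are zero anyway.)"""
    k = len(observed)
    residuals = [(observed[i][j] - implied[i][j]) ** 2
                 for i in range(k) for j in range(i)]
    return math.sqrt(sum(residuals) / len(residuals))

# Hypothetical observed vs. model-implied correlations (3 indicators).
R_obs = [[1.00, 0.50, 0.40],
         [0.50, 1.00, 0.30],
         [0.40, 0.30, 1.00]]
R_imp = [[1.00, 0.45, 0.35],
         [0.45, 1.00, 0.25],
         [0.35, 0.25, 1.00]]
fit = srmr(R_obs, R_imp)   # each residual is 0.05, so SRMR = 0.05
```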
Several notes of caution are important regarding the use of goodness-of-fit measures in PLS-SEM. First and foremost, the literature casts doubt on whether measured fit – as understood in a factor-based SEM context – is a relevant concept for PLS-SEM (Hair et al. 2017b; Rigdon 2012; Lohmöller 1989). Factor-based SEM follows an explanatory modeling perspective in that the algorithm estimates all the model parameters such that the divergence between the empirical covariance matrix and the model-implied covariance matrix is minimized. In contrast, the PLS-SEM algorithm follows a prediction modeling perspective in that the method aims to maximize the amount of explained variance of the endogenous latent variables. Explanation and prediction are two distinct concepts of statistical modeling and estimation. “In explanatory modeling the focus is on minimizing bias to obtain the most accurate representation of the underlying theory. In contrast, predictive modeling seeks to minimize the combination of bias and estimation variance, occasionally sacrificing theoretical accuracy for improved empirical precision” (Shmueli 2010, p. 293). Correspondingly, a grossly misspecified model can yield superior predictions, whereas a correctly specified model can perform extremely poorly in terms of prediction – see the Appendix in Shmueli (2010) for an illustration. Researchers using PLS-SEM overcome this seeming dichotomy between explanatory and predictive modeling since they expect their model to have high predictive accuracy while also being grounded in well-developed causal explanations. Gregor (2006, p. 626) refers to this interplay as explanation and prediction theory, noting that this approach “implies both understanding of underlying causes and prediction, as well as description of theoretical constructs and the relationships among them.” This perspective corresponds to Jöreskog and Wold’s (1982, p. 270) understanding of PLS-SEM, who labeled the method a “causal-predictive” technique, meaning that when structural theory is strong, path relationships can be interpreted as causal. Hence, validation using goodness-of-fit measures is also relevant in a PLS-SEM context, but less so compared to factor-based SEM. Instead, researchers should primarily rely on criteria that assess the model’s predictive performance (e.g., Rigdon 2012, 2014b). For example, Shmueli et al. (2016) introduced a new approach to assess PLS-SEM’s out-of-sample prediction on the item level, which extends Stone-Geisser’s cross-validation for measuring the predictive relevance of PLS path models (Wold 1982).
Reasons for Using PLS-SEM

The goal is to predict and explain a key target construct and/or to identify its relevant antecedent constructs 
The path model is relatively complex, as evidenced by many constructs per model (six or more) and indicators per construct (more than four indicators) 
The path model includes formatively measured constructs; note that factor-based SEM can also include formative measures, but doing so requires certain model adjustments to meet identification requirements; alternatively, formative constructs may be included as simple composites (based on equal weighting of the composite’s indicators; Grace and Bollen 2008) 
The sample size is limited (e.g., in business-to-business research) 
The research is based on secondary or archival data, which lack a comprehensive substantiation on the grounds of measurement theory 
The objective is to use latent variable scores in subsequent analyses 
The goal is to mimic factor-based SEM results of common factor models by using PLSc (e.g., when the model and/or data do not meet the requirements of factor-based SEM) 
Evaluation of PLS-SEM Results
Procedure
Researchers have developed numerous guidelines for assessing PLS-SEM results (Chin 1998, 2010; Götz et al. 2010; Henseler et al. 2009; Tenenhaus et al. 2005; Roldán and Sánchez-Franco 2012; Hair et al. 2017b). Starting with the measurement model assessment and continuing with the structural model assessment, these guidelines offer rules of thumb for interpreting the adequacy of the results. Note that a rule of thumb is a broadly applicable and easily applied guideline for decision-making that should not be strictly interpreted in every situation. Therefore, the threshold for a rule of thumb may vary.
Stage 1.1: Reflective Measurement Model Assessment
In the case of reflectively specified constructs, a researcher begins Stage 1 by examining the indicator loadings. Loadings above 0.70 indicate that the construct explains more than 50% of the indicator’s variance, demonstrating that the indicator exhibits a satisfactory degree of reliability.
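Since indicator reliability equals the squared standardized loading, the 50% threshold follows directly from the 0.70 cutoff; a minimal calculation (generic Python, not tied to any particular PLS-SEM software) illustrates this:

```python
# Indicator reliability is the squared standardized loading, i.e., the
# share of the indicator's variance that the construct explains.
def indicator_reliability(loading):
    return loading ** 2

# A loading of exactly 0.70 explains 49% of the indicator's variance;
# loadings from roughly 0.708 upward cross the 50% mark.
print(round(indicator_reliability(0.70), 2))   # 0.49
print(indicator_reliability(0.708) > 0.5)      # True
```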
For the composite reliability criterion, higher values indicate higher levels of reliability. For instance, researchers can consider values between 0.60 and 0.70 as “acceptable in exploratory research,” whereas results between 0.70 and 0.95 represent “satisfactory to good” reliability levels (Hair et al. 2017b, p. 112). However, values that are too high (e.g., higher than 0.95) are problematic, as they suggest that the items are almost identical and redundant. The reason may be (almost) the same item questions in a survey or undesirable response patterns such as straight lining (Diamantopoulos et al. 2012).
Generally, when estimating reflective measurement models with PLS-SEM, Cronbach’s alpha is the lower bound and the composite reliability ρ_c the upper bound of internal consistency reliability. Researchers should therefore consider both measures in their internal consistency reliability assessment. Alternatively, they may assess the reliability coefficient ρ_A (Dijkstra and Henseler 2015b), which usually returns a value between Cronbach’s alpha and the composite reliability ρ_c.
Hence, high HTMT values indicate discriminant validity problems. Based on prior research and their simulation study results, Henseler et al. (2015) suggest a threshold value of 0.90 if the path model includes constructs that are conceptually very similar (e.g., affective satisfaction, cognitive satisfaction, and loyalty); that is, in this situation, an HTMT value exceeding 0.90 suggests a lack of discriminant validity. However, when the constructs in the path model are conceptually more distinct, researchers should consider 0.85 as the threshold for HTMT (Henseler et al. 2015). Furthermore, using the bootstrapping procedure, researchers can formally test whether the HTMT value is significantly lower than one (also referred to as HTMT_inference).
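Computationally, the HTMT is the mean of the between-block (heterotrait-heteromethod) indicator correlations divided by the geometric mean of the two blocks' average within-block (monotrait-heteromethod) correlations. The following sketch, with a function name and simulated data of our own choosing, illustrates the idea for two reflectively measured constructs:

```python
import numpy as np

def htmt(X, Y):
    """HTMT for two indicator blocks X (n, kx) and Y (n, ky): mean
    between-block correlation divided by the geometric mean of the
    average within-block (off-diagonal) correlations."""
    kx, ky = X.shape[1], Y.shape[1]
    R = np.corrcoef(np.hstack([X, Y]), rowvar=False)
    hetero = R[:kx, kx:].mean()
    mono_x = (R[:kx, :kx].sum() - kx) / (kx * (kx - 1))
    mono_y = (R[kx:, kx:].sum() - ky) / (ky * (ky - 1))
    return hetero / np.sqrt(mono_x * mono_y)

# Two blocks driven by the same latent variable yield an HTMT close
# to 1, signaling a discriminant validity problem.
rng = np.random.default_rng(42)
latent = rng.normal(size=(500, 1))
X = latent + 0.4 * rng.normal(size=(500, 3))
Y = latent + 0.4 * rng.normal(size=(500, 3))
print(htmt(X, Y) > 0.90)   # True
```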
Stage 1.2: Formative Measurement Model Assessment
Formatively specified constructs are evaluated differently from reflectively measured constructs. Their evaluation involves examination of (1) the convergent validity, (2) indicator collinearity, and (3) statistical significance and relevance of the indicator weights – see Fig. 3.
The convergent validity of formatively measured constructs is determined on the basis of the extent to which the construct correlates with a reflectively measured (or single-item) construct capturing the same concept (also referred to as redundancy analysis; Chin 1998). Accordingly, researchers must plan for the assessment of convergent validity in the research design stage by including a reflectively measured construct, or single-item measure, of the formatively measured construct in the final questionnaire. Note that one should generally try to avoid using single items for construct measurement. Single items exhibit significantly lower levels of predictive validity compared to multi-item scales (Sarstedt et al. 2016a), which can be particularly problematic when using a variance-based analysis technique such as PLS-SEM.
Higher R^2 values in the kth regression imply that the variance of the kth item can be explained by the other items in the same measurement model, which indicates collinearity issues. Likewise, the higher the VIF, the greater the level of collinearity. As a rule of thumb, VIF values above 5 are indicative of collinearity among the indicators.
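The VIF can be computed exactly as described: regress each indicator on all remaining indicators of the same measurement model and convert the resulting R^2 via VIF = 1/(1 − R^2). A minimal sketch with simulated data (ours, for illustration only):

```python
import numpy as np

def vif(X):
    """VIF for each column of X (n, k): regress column j on the other
    columns, take R^2, and return 1 / (1 - R^2)."""
    n, k = X.shape
    values = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        values.append(1.0 / (1.0 - r2))
    return np.array(values)

# Two nearly redundant indicators produce VIFs far above the
# rule-of-thumb cutoff of 5; an unrelated indicator stays near 1.
rng = np.random.default_rng(0)
x1 = rng.normal(size=300)
x2 = x1 + 0.1 * rng.normal(size=300)   # near-duplicate of x1
x3 = rng.normal(size=300)
print(vif(np.column_stack([x1, x2, x3])))
```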
The third step in assessing formatively measured constructs is examining the statistical significance and relevance (i.e., the size) of the indicator weights. In contrast to regression analysis, PLS-SEM does not make any distributional assumptions regarding the error terms that would facilitate the immediate testing of the weights’ significance based on, for example, the normal distribution. Instead, the researcher must run bootstrapping, a procedure that draws a large number of subsamples (typically 5,000) from the original data with replacement. The model is then estimated for each of the subsamples, yielding a high number of estimates for each model parameter.
Using the subsamples from bootstrapping, the researcher can construct a distribution of the parameter under consideration and compute bootstrap standard errors, which allow for determining the statistical significance of the original indicator weights. More precisely, bootstrap standard errors allow for computing t values (and corresponding p values). When interpreting the results, reviewers and editors should be aware that bootstrapping is a random process, which yields different results every time it is initiated. While the results from one bootstrapping run to the next generally do not differ fundamentally when using a large number of bootstrap samples such as 5,000, bootstrapping-based p values slightly lower than a predefined cutoff level should give rise to concern. In such a case, researchers may have repeatedly applied bootstrapping until a certain parameter has become significant, a practice referred to as p-hacking.

If the weight is statistically significant, the indicator is retained.

If the weight is nonsignificant, but the indicator’s loading is 0.50 or higher, the indicator is still retained if theory and expert judgment support its inclusion.

If the weight is nonsignificant and the loading is low (i.e., below 0.50), the indicator should be deleted from the measurement model.
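The resampling logic behind these significance decisions can be sketched in a few lines. For brevity, the statistic below is a simple bivariate correlation and the interval a percentile interval; PLS-SEM software re-estimates the full path model in every subsample and typically reports bias-corrected and accelerated (BCa) intervals:

```python
import numpy as np

def bootstrap_ci(x, y, n_boot=5000, seed=1):
    """95% percentile bootstrap CI for corr(x, y), plus a significance
    flag: the estimate is significant at the 5% level if zero falls
    outside the interval."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows with replacement
        stats[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return lo, hi, not (lo <= 0.0 <= hi)

rng = np.random.default_rng(7)
x = rng.normal(size=400)
y = 0.4 * x + rng.normal(size=400)
print(bootstrap_ci(x, y))   # interval excludes zero -> significant
```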
Researchers must be cautious when deleting formative indicators based on statistical outcomes for at least the following two reasons. First, the indicator weight is a function of the number of indicators used to measure a construct: The higher the number of indicators, the lower their average weight. In other words, formative measurement models have an inherent limit to the number of indicators that can retain a statistically significant weight (e.g., Cenfetelli and Bassellier 2009). Second, as formative indicators define the construct’s empirical meaning, indicator deletion should be considered with caution and should generally be the exception. Content validity considerations are imperative before deleting formative indicators (e.g., Diamantopoulos and Winklhofer 2001).
Having assessed the formative indicator weights’ statistical significance, the final step is to examine each indicator’s relevance for shaping the construct. Indicator weights are standardized to values that are usually between −1 and +1, with weights closer to +1 (or −1) representing strong positive (or negative) relationships, and weights closer to 0 indicating weak relationships. Note that values below −1 and above +1 may technically occur, for instance, when collinearity is at critical levels.
Stage 2: Structural Model Assessment
Provided that the measurement model assessment indicates satisfactory quality, the researcher moves to the assessment of the structural model in Stage 2 of the PLS-SEM evaluation process (Fig. 3). After checking for potential collinearity issues among the constructs, this stage primarily focuses on the predictive capabilities of the model, as indicated by the following criteria: the coefficient of determination (R^2), the cross-validated redundancy (Q^2), and the path coefficients.
Computation of the path coefficients linking the constructs is based on a series of regression analyses. Therefore, the researcher must ascertain that collinearity issues do not bias the regression results. This step is analogous to the formative measurement model assessment, with the difference that the scores of the exogenous latent variables serve as input for the VIF assessments. VIF values above 5 are indicative of collinearity among the predictor constructs.
The next step involves reviewing the R^2, which indicates the variance explained in each of the endogenous constructs. The R^2 ranges from 0 to 1, with higher levels indicating greater predictive accuracy. As a rough rule of thumb, R^2 values of 0.75, 0.50, and 0.25 can be considered substantial, moderate, and weak, respectively (Henseler et al. 2009; Hair et al. 2011). It is important to note, however, that in some research contexts, R^2 values of 0.10 are considered satisfactory, for example, in the context of predicting stock returns (e.g., Raithel et al. 2012). Against this background, the researcher should always interpret the R^2 in the context of the study at hand by considering R^2 values from related studies.
Another means to assess the model’s predictive accuracy is the Q^2 value (Geisser 1974; Stone 1974). The Q^2 value builds on the blindfolding procedure, which omits single points in the data matrix, imputes the omitted elements, and estimates the model parameters. Using these estimates as input, the blindfolding procedure predicts the omitted data points. This process is repeated until every data point has been omitted and the model reestimated. The smaller the difference between the predicted and the original values, the greater the Q^2 criterion and, thus, the model’s predictive accuracy and relevance. As a rule of thumb, Q^2 values larger than zero for a particular endogenous construct indicate that the path model’s predictive accuracy is acceptable for this construct.
To initiate the blindfolding procedure, researchers need to determine the sequence of data points to be omitted in each run. An omission distance of 7, for example, implies that every seventh data point of the endogenous construct’s indicators is eliminated in a single blindfolding run. Hair et al. (2017a) suggest using an omission distance between 5 and 10. Furthermore, there are two approaches to calculating the Q^2 value: cross-validated redundancy and cross-validated communality, the former of which is generally recommended to explore the predictive relevance of the PLS path model (Wold 1982). Analogous to the f^2 effect size, researchers can also analyze the q^2 effect size, which indicates the change in the Q^2 value when a specified exogenous construct is omitted from the model. As a relative measure of predictive relevance, q^2 values of 0.02, 0.15, and 0.35 indicate that an exogenous construct has a small, medium, or large predictive relevance, respectively, for a certain endogenous construct.
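Schematically, the omission pattern and the Q^2 statistic can be written as follows. This is our simplified sketch, not the full blindfolding implementation: SSE compares the omitted data points with their model-based predictions, and SSO compares them with a trivial mean-value prediction.

```python
import numpy as np

def q_squared(omitted, predicted):
    """Q^2 = 1 - SSE / SSO: prediction errors for the omitted data
    points relative to a trivial mean-value prediction. Values above
    zero indicate predictive relevance."""
    sse = np.sum((omitted - predicted) ** 2)
    sso = np.sum((omitted - omitted.mean()) ** 2)
    return 1.0 - sse / sso

def omission_rounds(n_rows, n_cols, d=7):
    """Masks for d blindfolding rounds: round r deletes every d-th cell
    (running through the indicator data matrix), so each cell is
    omitted exactly once across all rounds."""
    cells = np.arange(n_rows * n_cols).reshape(n_rows, n_cols)
    return [cells % d == r for r in range(d)]

masks = omission_rounds(n_rows=10, n_cols=3, d=7)
print(sum(m.sum() for m in masks))   # 30: every cell omitted once
```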
A downside of these metrics is that they tend to overfit a particular sample if predictive validity is evaluated in the same sample used for estimation (Shmueli 2010). This critique holds particularly for the R^2, which is considered a measure of the model’s predictive accuracy in terms of in-sample prediction (Rigdon 2012). Predictive validity assessment, however, requires assessing prediction on the grounds of holdout samples. But the overfitting problem also applies to the Q^2 values, whose computation does not draw on holdout samples, but on single data points (as opposed to entire observations) being omitted and imputed. Hence, the Q^2 values can only be partly considered a measure of out-of-sample prediction, because the sample structure remains largely intact in its computation. As Shmueli et al. (2016) point out, “fundamental to a proper predictive procedure is the ability to predict measurable information on new cases.” As a remedy, Shmueli et al. (2016) developed the PLSpredict procedure for generating holdout-sample-based point predictions in PLS path models at the item or construct level.
Subsequently, the strength and significance of the path coefficients are evaluated for the relationships (structural paths) hypothesized between the constructs. Similar to the assessment of formative indicator weights, the significance assessment builds on bootstrap standard errors as a basis for calculating t and p values of the path coefficients, or – as recommended in recent literature – their (bias-corrected and accelerated) confidence intervals (Aguirre-Urreta and Rönkkö 2017). A path coefficient is significant at the 5% probability of error level if zero does not fall into the 95% (bias-corrected and accelerated) confidence interval. For example, a path coefficient of 0.15 with 0.1 and 0.2 as the lower and upper bounds of the 95% (bias-corrected and accelerated) confidence interval would be considered significant, since zero does not fall into this confidence interval. By contrast, with a lower bound of −0.05 and an upper bound of 0.35, the coefficient would be considered not significant.
In terms of relevance, path coefficients usually take values between −1 and +1, with coefficients closer to +1 representing strong positive relationships and those closer to −1 indicating strong negative relationships (note that values below −1 and above +1 may technically occur, for instance, when collinearity is at critical levels). A path coefficient of, say, 0.5 implies that if the independent construct increases by one standard deviation, the dependent construct will increase by 0.5 standard deviations, keeping all other independent constructs constant. Whether the size of the coefficient is meaningful should be decided within the research context. When examining the structural model results, researchers should also interpret total effects. These correspond to the sum of the direct effect and all indirect effects between two constructs in the path model. With regard to the path model shown in Fig. 1, Y_1 has a direct effect (b_1) and an indirect effect (b_2 · b_3) via Y_2 on the endogenous construct Y_3. Hence, the total effect of Y_1 on Y_3 is b_1 + b_2 · b_3. The examination of total effects between constructs, including all their indirect effects, provides a more comprehensive picture of the structural model relationships (Nitzl et al. 2016).
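The total-effect computation generalizes beyond this three-construct case: for a recursive model with path coefficient matrix B, the matrix of total effects is (I − B)^−1 − I, which sums the direct path and all indirect paths. A short numerical check, using illustrative coefficient values of our own:

```python
import numpy as np

b1, b2, b3 = 0.50, 0.40, 0.30   # illustrative path coefficients (Fig. 1)

# B[i, j] holds the direct effect of construct j on construct i
# (rows/columns ordered Y1, Y2, Y3).
B = np.array([[0.0, 0.0, 0.0],
              [b2,  0.0, 0.0],
              [b1,  b3,  0.0]])

# Total effects for a recursive (acyclic) model: (I - B)^-1 - I
total = np.linalg.inv(np.eye(3) - B) - np.eye(3)
print(round(total[2, 0], 2))   # b1 + b2 * b3 = 0.62
```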
Research Application
Corporate Reputation Model
The empirical application builds on the corporate reputation model and data that Hair et al. (2017b) use in their book A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), and that Hair et al. (2018) also employ in their Advanced Issues in Partial Least Squares Structural Equation Modeling. The PLS path model creation and estimation draw on the SmartPLS 3 software. The software, model files, and datasets used in this market research application can be downloaded at http://www.smartpls.com.
Item wordings (Hair et al. 2017b)
Attractiveness (ATTR) – formative  
attr_1  [The company] is successful in attracting high-quality employees 
attr_2  I could see myself working at [the company] 
attr_3  I like the physical appearance of [the company] (company, buildings, shops, etc.) 
Competence (COMP) – reflective  
comp_1  [The company] is a top competitor in its market 
comp_2  As far as I know, [the company] is recognized worldwide 
comp_3  I believe that [the company] performs at a premium level 
Corporate Social Responsibility (CSOR) – formative  
csor_1  [The company] behaves in a socially conscious way 
csor_2  [The company] is forthright in giving information to the public 
csor_3  [The company] has a fair attitude toward competitors 
csor_4  [The company] is concerned about the preservation of the environment 
csor_5  [The company] is not only concerned about profits 
Customer loyalty (CUSL) – reflective  
cusl_1  I would recommend [company] to friends and relatives 
cusl_2  If I had to choose again, I would choose [company] as my mobile phone services provider 
cusl_3  I will remain a customer of [company] in the future 
Customer satisfaction (CUSA) – single item  
cusa  If you consider your experiences with [company], how satisfied are you with [company]? 
Likeability (LIKE) – reflective  
like_1  [The company] is a company that I can better identify with than other companies 
like_2  [The company] is a company that I would regret more not having if it no longer existed than I would other companies 
like_3  I regard [the company] as a likeable company 
Quality (QUAL) – formative  
qual_1  The products/services offered by [the company] are of high quality 
qual_2  [The company] is an innovator, rather than an imitator with respect to [industry] 
qual_3  [The company]’s products/services offer good value for money 
qual_4  The services [the company] offers are good 
qual_5  Customer concerns are held in high regard at [the company] 
qual_6  [The company] is a reliable partner for customers 
qual_7  [The company] is a trustworthy company 
qual_8  I have a lot of respect for [the company] 
Performance (PERF) – formative  
perf_1  [The company] is a very wellmanaged company 
perf_2  [The company] is an economically stable company 
perf_3  The business risk for [the company] is modest compared to its competitors 
perf_4  [The company] has growth potential 
perf_5  [The company] has a clear vision about the future of the company 
Data
The model estimation draws on data from four German mobile communications network providers. A total of 344 respondents rated the questions related to the items on a 7-point Likert scale, whereby a value of 7 always represents the best possible judgment and a value of 1 the opposite. The most complex partial regression in the PLS path model has eight independent variables (i.e., the formative measurement model of QUAL). Hence, based on power statistics, this sample size is technically large enough to estimate the PLS path model. Specifically, to detect R^2 values of around 0.25, assuming a power level of 80% and a significance level of 5%, one would need merely 54 observations. The dataset has only 11 missing values, which are coded with the value −99. The maximum number of missing data points per item is 4 of 344 (1.16%) for cusl_2. Since the relative number of missing values is very small, we continue the analysis using the mean value replacement option for missing data. A box plot diagnostic by means of IBM SPSS Statistics (see chapter 5 in Sarstedt and Mooi 2014) reveals influential observations, but no outliers. Finally, the skewness and excess kurtosis values, as provided by the SmartPLS 3 data view, show that all the indicators are within the acceptable range of −1 to +1. The only slight exception is the cusl_2 indicator (skewness of −1.30). However, this degree of nonnormality in a single indicator is not a critical issue.
Model Estimation
The model estimation uses the basic PLS-SEM algorithm by Lohmöller (1989), the path weighting scheme, a maximum of 300 iterations, a stop criterion of 0.0000001 (i.e., 1 × 10^−7), and equal indicator weights for the initialization (the default settings in the SmartPLS 3 software). After running the algorithm, it is important to ascertain that the algorithm converged (i.e., the stop criterion was reached) rather than stopping at the maximum number of iterations. In practice, however, the PLS-SEM algorithm almost always converges, even in very complex market research applications (Henseler 2010).
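The iterative core of the algorithm alternates between outer estimation (construct scores as standardized weighted sums of their indicators), inner estimation (scores re-expressed through adjacent constructs), and updated Mode A outer weights, until the stop criterion is reached. The following didactic sketch for a single relationship between two reflectively measured blocks is our own simplification of the Lohmöller (1989) design, on simulated data; with only two blocks, the inner weighting schemes coincide:

```python
import numpy as np

def pls_two_blocks(X1, X2, max_iter=300, tol=1e-7):
    """Didactic two-block PLS estimation with Mode A outer weights.
    Returns the path coefficient between the two construct scores and
    the final (unit-norm) outer weight vectors."""
    z = lambda M: (M - M.mean(axis=0)) / M.std(axis=0)
    X1, X2 = z(X1), z(X2)
    w1 = np.ones(X1.shape[1]) / np.sqrt(X1.shape[1])  # equal start weights
    w2 = np.ones(X2.shape[1]) / np.sqrt(X2.shape[1])
    for _ in range(max_iter):
        y1, y2 = z(X1 @ w1), z(X2 @ w2)        # outer estimation: scores
        r = np.corrcoef(y1, y2)[0, 1]
        nw1 = X1.T @ (r * y2) / len(y2)        # inner proxy = r * neighbor;
        nw2 = X2.T @ (r * y1) / len(y1)        # Mode A: indicator-proxy cov
        nw1 /= np.linalg.norm(nw1)
        nw2 /= np.linalg.norm(nw2)
        delta = max(np.abs(nw1 - w1).max(), np.abs(nw2 - w2).max())
        w1, w2 = nw1, nw2
        if delta < tol:                        # stop criterion reached
            break
    y1, y2 = z(X1 @ w1), z(X2 @ w2)
    return np.corrcoef(y1, y2)[0, 1], w1, w2

# Simulated data: two latent variables with a true correlation of 0.6,
# each measured by three noisy indicators.
rng = np.random.default_rng(3)
lv1 = rng.normal(size=600)
lv2 = 0.6 * lv1 + 0.8 * rng.normal(size=600)
X1 = lv1[:, None] + 0.5 * rng.normal(size=(600, 3))
X2 = lv2[:, None] + 0.5 * rng.normal(size=(600, 3))
path, w1, w2 = pls_two_blocks(X1, X2)
print(path)   # close to the true latent correlation of 0.6
```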
Results Evaluation
Reflective Measurement Model Assessment
PLS-SEM assessment results of reflective measurement models
Latent variable  Indicators  Convergent validity  Internal consistency reliability  

Loadings  Indicator reliability  AVE  Composite reliability ρ_c  Reliability ρ_A  Cronbach’s alpha  
>0.70  >0.50  >0.50  >0.70  >0.70  0.70–0.90  
COMP  comp_1  0.824  0.679  0.688  0.869  0.786  0.776 
comp_2  0.821  0.674  
comp_3  0.844  0.712  
CUSL  cusl_1  0.833  0.694  0.748  0.899  0.839  0.831 
cusl_2  0.917  0.841  
cusl_3  0.843  0.711  
LIKE  like_1  0.880  0.774  0.747  0.899  0.836  0.831 
like_2  0.869  0.755  
like_3  0.844  0.712 
HTMT values
COMP  CUSA  CUSL  LIKE  

COMP  
CUSA  0.465 [0.364;0.565]  
CUSL  0.532 [0.421;0.638]  0.755 [0.684;0.814]  
LIKE  0.780 [0.690;0.853]  0.577 [0.489;0.661]  0.737 [0.653;0.816] 
The CUSA construct is not included in the reflective (and subsequent formative) measurement model assessment because it is a single-item construct. For this construct, indicator data and latent variable scores are identical. Consequently, CUSA does not have a measurement model that can be assessed using the standard evaluation criteria.
Formative Measurement Model Assessment
Formative indicator weights and significance testing results
Formative constructs  Formative indicators  Outer weights (Outer loadings)  95% BCa confidence interval  Significant (p <0.05)? 

ATTR  attr_1  0.414 (0.755)  [0.273, 0.554]  Yes 
attr_2  0.201 (0.506)  [0.073, 0.332]  Yes  
attr_3  0.658 (0.891)  [0.516, 0.761]  Yes  
CSOR  csor_1  0.306 (0.771)  [0.145, 0.458]  Yes 
csor_2  0.037 (0.571)  [−0.082, 0.167]  No  
csor_3  0.406 (0.838)  [0.229, 0.550]  Yes  
csor_4  0.080 (0.617)  [−0.063, 0.219]  No  
csor_5  0.416 (0.848)  [0.260, 0.611]  Yes  
PERF  perf_1  0.468 (0.846)  [0.310, 0.582]  Yes 
perf_2  0.177 (0.690)  [0.044, 0.311]  Yes  
perf_3  0.194 (0.573)  [0.069, 0.298]  Yes  
perf_4  0.340 (0.717)  [0.205, 0.479]  Yes  
perf_5  0.199 (0.638)  [0.070, 0.350]  Yes  
QUAL  qual_1  0.202 (0.741)  [0.103, 0.332]  Yes 
qual_2  0.041 (0.570)  [−0.074, 0.148]  No  
qual_3  0.106 (0.749)  [−0.004, 0.227]  No  
qual_4  −0.005 (0.664)  [−0.119, 0.081]  No  
qual_5  0.160 (0.787)  [0.045, 0.267]  Yes  
qual_6  0.398 (0.856)  [0.275, 0.501]  Yes  
qual_7  0.229 (0.722)  [0.094, 0.328]  Yes  
qual_8  0.190 (0.627)  [0.060, 0.303]  Yes 
The results of the reflective and formative measurement model assessment suggest that all construct measures exhibit satisfactory levels of reliability and validity. We can therefore proceed with the assessment of the structural model.
Structural Model Assessment
Path coefficients of the structural model and significance testing results
Path coefficient  95% BCa confidence interval  Significant (p <0.05)?  f^2 effect size  q^2 effect size  

ATTR → COMP  0.086  [−0.013, 0.187]  No  0.009  <0.001 
ATTR → LIKE  0.167  [0.032, 0.273]  Yes  0.030  0.016 
COMP → CUSA  0.146  [0.033, 0.275]  Yes  0.018  0.006 
COMP → CUSL  0.006  [−0.111, 0.113]  No  <0.001  −0.002 
CSOR → COMP  0.059  [−0.036, 0.156]  No  0.005  −0.005 
CSOR → LIKE  0.178  [0.084, 0.288]  Yes  0.035  0.018 
CUSA → CUSL  0.505  [0.428, 0.580]  Yes  0.412  0.229 
LIKE → CUSA  0.436  [0.312, 0.544]  Yes  0.159  0.151 
LIKE → CUSL  0.344  [0.241, 0.453]  Yes  0.138  0.077 
PERF → COMP  0.295  [0.172, 0.417]  Yes  0.082  0.039 
PERF → LIKE  0.117  [−0.012, 0.254]  No  0.011  0.003 
QUAL → COMP  0.430  [0.295, 0.560]  Yes  0.143  0.052 
QUAL → LIKE  0.380  [0.278, 0.514]  Yes  0.094  0.050 
When analyzing the key predictors of LIKE, which has a substantial R^2 value of 0.558, we find that QUAL has the strongest significant effect (0.380), followed by CSOR (0.178) and ATTR (0.167); PERF (0.117) has the weakest effect on LIKE, which is not significant at the 5% level (Table 6). Corporate reputation’s cognitive dimension COMP also has a substantial R^2 value of 0.631. Analyzing this construct’s predictors shows that QUAL (0.430) and PERF (0.295) have the strongest significant effects. In contrast, the effects of ATTR (0.086) and CSOR (0.059) on COMP are not significant at the 5% level. Analyzing the exogenous constructs’ total effects on CUSL shows that QUAL has the strongest total effect (0.248), followed by CSOR (0.105), ATTR (0.101), and PERF (0.089). These results suggest that companies should focus on marketing activities that positively influence customers’ perceptions of the quality of their products and services.
Table 6 also shows the f^2 effect sizes. Relatively high f^2 effect sizes occur for the relationships CUSA ➔ CUSL (0.412), LIKE ➔ CUSA (0.159), QUAL ➔ COMP (0.143), and LIKE ➔ CUSL (0.138). These relationships also have particularly strong path coefficients of 0.30 and higher. Interestingly, the relationship between QUAL and LIKE has a strong path coefficient of 0.380 but only a weak f^2 effect size of 0.094. All the other f^2 effect sizes in the structural model are weak and, if below 0.02, negligible.
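These effect sizes follow the standard formula f^2 = (R^2_included − R^2_excluded) / (1 − R^2_included), i.e., the change in explained variance when the exogenous construct is omitted, relative to the unexplained variance; substituting Q^2 values into the same formula yields q^2. As a one-line sketch:

```python
def effect_size(r2_included, r2_excluded):
    """f^2 (or, substituting Q^2 values, q^2) effect size; 0.02, 0.15,
    and 0.35 mark small, medium, and large effects, respectively."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# E.g., omitting a predictor that lowers R^2 from 0.50 to 0.40:
print(round(effect_size(0.50, 0.40), 2))   # 0.2 (a small-to-medium effect)
```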
Finally, we determine the predictive relevance of the PLS path model by carrying out the blindfolding procedure using an omission distance of D = 7. The resulting cross-validated redundancy Q^2 values are above zero for all endogenous constructs, providing support for the model’s predictive accuracy. More precisely, CUSL and COMP have the highest Q^2 values (0.415), followed by LIKE (0.406) and, finally, CUSA (0.280). Further analysis of the q^2 effect sizes shows that the relationships CUSA ➔ CUSL (0.229) and LIKE ➔ CUSA (0.151) have moderate effect sizes of 0.15 and higher. All other q^2 effect sizes are weak and, if below 0.02, negligible.
Conclusions
Most prior research discussing the benefits and limitations of PLS-SEM or analyzing its performance (e.g., in terms of parameter estimation) has not acknowledged that the method takes on a fundamentally different philosophy of measurement compared to factor-based SEM (e.g., Rigdon et al. 2017). Rather than assuming a common factor model structure, PLS-SEM draws on composite model logic to represent reflective and formative measurement models. The method linearly combines sets (or blocks) of indicators to form composites that represent the conceptual variables and assesses the extent to which these measures are valid and reliable (Tenenhaus et al. 2005). In other words, PLS-SEM is an approximation method that inherently recognizes that constructs and conceptual variables are not identical (Rigdon et al. 2017). For several reasons, factor indeterminacy being the most prominent one, this view of measurement is more reasonable than the common factor model logic assumed by factor-based SEM, which equates constructs and conceptual variables. As Rigdon (2016, p. 19) notes, “common factor proxies cannot be assumed to carry greater significance than composite proxies in regard to the existence or nature of conceptual variables.”
PLS-SEM offers a good approximation of common factor models in situations where factor-based SEM cannot deliver results due to its methodological limitations in terms of model complexity, sample size requirements, or the inclusion of composite variables in the model (Reinartz et al. 2009; Sarstedt et al. 2016b; Willaby et al. 2015). Dijkstra and Henseler’s (2015b) PLSc allows researchers to mimic factor-based SEM results while benefiting from the original PLS-SEM method’s flexibility in terms of model specification. However, such an analysis rests on the implicit assumption that factor-based SEM is the correct estimator that delivers the true results as a benchmark for SEM.
While standard PLS-SEM analyses provide important insights into the strength and significance of the hypothesized model relationships, more advanced modeling and estimation techniques shed further light on the nature of the proposed relationships. Research has brought forward a variety of complementary analysis techniques and procedures that extend the methodological toolbox of researchers working with the method. Examples of these methods include confirmatory tetrad analysis (CTA-PLS), which enables researchers to statistically test whether the measurement model operationalization should build on effect or composite indicators (Gudergan et al. 2008), and latent class techniques, which allow assessing whether unobserved heterogeneity affects the model estimates. Prominent examples of PLS-SEM-based latent class techniques include finite mixture partial least squares (Hahn et al. 2002; Sarstedt et al. 2011a), PLS genetic algorithm segmentation (Ringle et al. 2013, 2014), prediction-oriented segmentation (Becker et al. 2013b), and iterative reweighted regressions (Schlittgen et al. 2016). Further methods to account for heterogeneity in the structural model include the analysis of moderating effects (Henseler and Chin 2010), nonlinear effects (Henseler et al. 2012a), and multigroup analysis (Sarstedt et al. 2011b), including testing for measurement invariance (Henseler et al. 2016b). A further complementary method, the importance-performance map analysis (IPMA), facilitates richer outcome discussions in that it extends the analysis of total effects by adding a second results dimension that incorporates the average values of the latent variables (Ringle and Sarstedt 2016). Hair et al. (2018) provide a more detailed overview of and introduction to these complementary techniques for more advanced PLS-SEM analyses.
Cross-References
Notes
Acknowledgment
This chapter uses the statistical software SmartPLS 3 (http://www.smartpls.com). Ringle acknowledges a financial interest in SmartPLS.
References
 Aaker, D. A. (1991). Managing brand equity: Capitalizing on the value of a brand name. New York: Free Press.
 Aguirre-Urreta, M. I., & Rönkkö, M. (2017). Statistical inference with PLSc using bootstrap confidence intervals. MIS Quarterly, forthcoming.
 Akter, S., Wamba, S. F., & Dewan, S. (2017). Why PLS-SEM is suitable for complex modeling? An empirical illustration in big data analytics quality. Production Planning & Control, 28(11–12), 1011–1021.
 Ali, F., Rasoolimanesh, S. M., Sarstedt, M., Ringle, C. M., & Ryu, K. (2017). An assessment of the use of partial least squares structural equation modeling (PLS-SEM) in hospitality research. International Journal of Contemporary Hospitality Management, forthcoming.
 Baumgartner, H., & Homburg, C. (1996). Applications of structural equation modeling in marketing and consumer research: A review. International Journal of Research in Marketing, 13(2), 139–161.
 Becker, J.-M., & Ismail, I. R. (2016). Accounting for sampling weights in PLS path modeling: Simulations and empirical examples. European Management Journal, 34(6), 606–617.
 Becker, J.-M., Rai, A., & Rigdon, E. E. (2013a). Predictive validity and formative measurement in structural equation modeling: Embracing practical relevance. In Proceedings of the 2013 international conference on information systems, Milan.
 Becker, J.-M., Rai, A., Ringle, C. M., & Völckner, F. (2013b). Discovering unobserved heterogeneity in structural equation models to avert validity threats. MIS Quarterly, 37(3), 665–694.
 Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.
 Bollen, K. A. (2002). Latent variables in psychology and the social sciences. Annual Review of Psychology, 53(1), 605–634.
 Bollen, K. A. (2011). Evaluating effect, composite, and causal indicators in structural equation models. MIS Quarterly, 35(2), 359–372.
 Bollen, K. A., & Bauldry, S. (2011). Three Cs in measurement models: Causal indicators, composite indicators, and covariates. Psychological Methods, 16(3), 265–284.
 Bollen, K. A., & Diamantopoulos, A. (2017). In defense of causal–formative indicators: A minority report. Psychological Methods, forthcoming.
 Bollen, K. A., & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110(2), 305–314.
 Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110(2), 203–219.
 Cenfetelli, R. T., & Bassellier, G. (2009). Interpretation of formative measurement in information systems research. MIS Quarterly, 33(4), 689–708.
 Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In G. A. Marcoulides (Ed.), Modern methods for business research (pp. 295–358). Mahwah: Erlbaum.
 Chin, W. W. (2010). How to write up and report PLS analyses. In V. Esposito Vinzi, W. W. Chin, J. Henseler, & H. Wang (Eds.), Handbook of partial least squares: Concepts, methods and applications (Springer Handbooks of Computational Statistics Series, Vol. II, pp. 655–690). Heidelberg/Dordrecht/London/New York: Springer.
 Chin, W. W., Marcolin, B. L., & Newsted, P. R. (2003). A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. Information Systems Research, 14(2), 189–217.
 Chou, C.-P., Bentler, P. M., & Satorra, A. (1991). Scaled test statistics and robust standard errors for nonnormal data in covariance structure analysis: A Monte Carlo study. British Journal of Mathematical and Statistical Psychology, 44(2), 347–357.
 Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Mahwah: Lawrence Erlbaum Associates.
 Coltman, T., Devinney, T. M., Midgley, D. F., & Venaik, S. (2008). Formative versus reflective measurement models: Two applications of formative measurement. Journal of Business Research, 61(12), 1250–1262.
 Diamantopoulos, A. (2006). The error term in formative measurement models: Interpretation and modeling implications. Journal of Modeling in Management, 1(1), 7–17.
 Diamantopoulos, A. (2011). Incorporating formative measures into covariance-based structural equation models. MIS Quarterly, 35(2), 335–358.
 Diamantopoulos, A., & Winklhofer, H. M. (2001). Index construction with formative indicators: An alternative to scale development. Journal of Marketing Research, 38(2), 269–277.
 Diamantopoulos, A., Sarstedt, M., Fuchs, C., Wilczynski, P., & Kaiser, S. (2012). Guidelines for choosing between multi-item and single-item scales for construct measurement: A predictive validity perspective. Journal of the Academy of Marketing Science, 40(3), 434–449.
 Dijkstra, T. K. (2010). Latent variables and indices: Herman Wold's basic design and partial least squares. In V. Esposito Vinzi, W. W. Chin, J. Henseler, & H. Wang (Eds.), Handbook of partial least squares: Concepts, methods and applications (Springer Handbooks of Computational Statistics Series, Vol. II, pp. 23–46). Heidelberg/Dordrecht/London/New York: Springer.
 Dijkstra, T. K., & Henseler, J. (2011). Linear indices in nonlinear structural equation models: Best fitting proper indices and other composites. Quality & Quantity, 45(6), 1505–1518.
 Dijkstra, T. K., & Henseler, J. (2015a). Consistent and asymptotically normal PLS estimators for linear structural equations. Computational Statistics & Data Analysis, 81(1), 10–23.
 Dijkstra, T. K., & Henseler, J. (2015b). Consistent partial least squares path modeling. MIS Quarterly, 39(2), 297–316.
 Dijkstra, T. K., & Schermelleh-Engel, K. (2014). Consistent partial least squares for nonlinear structural equation models. Psychometrika, 79(4), 585–604.
 do Valle, P. O., & Assaker, G. (2016). Using partial least squares structural equation modeling in tourism research: A review of past research and recommendations for future applications. Journal of Travel Research, 55(6), 695–708.
 Eberl, M. (2010). An application of PLS in multi-group analysis: The need for differentiated corporate-level marketing in the mobile communications industry. In V. Esposito Vinzi, W. W. Chin, J. Henseler, & H. Wang (Eds.), Handbook of partial least squares: Concepts, methods and applications (Springer Handbooks of Computational Statistics Series, Vol. II, pp. 487–514). Heidelberg/Dordrecht/London/New York: Springer.
 Eberl, M., & Schwaiger, M. (2005). Corporate reputation: Disentangling the effects on financial performance. European Journal of Marketing, 39(7/8), 838–854.
 Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5(2), 155–174.
 Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: Chapman & Hall.
 Esposito Vinzi, V., Chin, W. W., Henseler, J., & Wang, H. (Eds.). (2010). Handbook of partial least squares: Concepts, methods and applications (Springer Handbooks of Computational Statistics Series, Vol. II). Heidelberg/Dordrecht/London/New York: Springer.
 Evermann, J., & Tate, M. (2016). Assessing the predictive performance of structural equation model estimators. Journal of Business Research, 69(10), 4565–4582.
 Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. Akron: University of Akron Press.
 Fornell, C. G., & Bookstein, F. L. (1982). Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. Journal of Marketing Research, 19(4), 440–452.
 Garson, G. D. (2016). Partial least squares regression and structural equation models. Asheboro: Statistical Associates.
 Geisser, S. (1974). A predictive approach to the random effects model. Biometrika, 61(1), 101–107.
 Goodhue, D. L., Lewis, W., & Thompson, R. (2012). Does PLS have advantages for small sample size or non-normal data? MIS Quarterly, 36(3), 981–1001.
 Götz, O., Liehr-Gobbers, K., & Krafft, M. (2010). Evaluation of structural equation models using the partial least squares (PLS) approach. In V. Esposito Vinzi, W. W. Chin, J. Henseler, & H. Wang (Eds.), Handbook of partial least squares: Concepts, methods and applications (Springer Handbooks of Computational Statistics Series, Vol. II, pp. 691–711). Heidelberg/Dordrecht/London/New York: Springer.
 Grace, J. B., & Bollen, K. A. (2008). Representing general theoretical concepts in structural equation models: The role of composite variables. Environmental and Ecological Statistics, 15(2), 191–213.
 Gregor, S. (2006). The nature of theory in information systems. MIS Quarterly, 30(3), 611–642.
 Gudergan, S. P., Ringle, C. M., Wende, S., & Will, A. (2008). Confirmatory tetrad analysis in PLS path modeling. Journal of Business Research, 61(12), 1238–1249.
 Haenlein, M., & Kaplan, A. M. (2004). A beginner's guide to partial least squares analysis. Understanding Statistics, 3(4), 283–297.
 Hahn, C., Johnson, M. D., Herrmann, A., & Huber, F. (2002). Capturing customer heterogeneity using a finite mixture PLS approach. Schmalenbach Business Review, 54(3), 243–269.
 Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing Theory and Practice, 19(2), 139–151.
 Hair, J. F., Sarstedt, M., Pieper, T. M., & Ringle, C. M. (2012a). The use of partial least squares structural equation modeling in strategic management research: A review of past practices and recommendations for future applications. Long Range Planning, 45(5–6), 320–340.
 Hair, J. F., Sarstedt, M., Ringle, C. M., & Mena, J. A. (2012b). An assessment of the use of partial least squares structural equation modeling in marketing research. Journal of the Academy of Marketing Science, 40(3), 414–433.
 Hair, J. F., Ringle, C. M., & Sarstedt, M. (2013). Partial least squares structural equation modeling: Rigorous applications, better results and higher acceptance. Long Range Planning, 46(1–2), 1–12.
 Hair, J. F., Hollingsworth, C. L., Randolph, A. B., & Chong, A. Y. L. (2017a). An updated and expanded assessment of PLS-SEM in information systems research. Industrial Management & Data Systems, 117(3), 442–458.
 Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017b). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). Thousand Oaks: Sage.
 Hair, J. F., Hult, G. T. M., Ringle, C. M., Sarstedt, M., & Thiele, K. O. (2017c). Mirror, mirror on the wall: A comparative evaluation of composite-based structural equation modeling methods. Journal of the Academy of Marketing Science, forthcoming.
 Hair, J. F., Sarstedt, M., Ringle, C. M., & Gudergan, S. P. (2018). Advanced issues in partial least squares structural equation modeling (PLS-SEM). Thousand Oaks: Sage.
 Helm, S., Eggert, A., & Garnefeld, I. (2010). Modelling the impact of corporate reputation on customer satisfaction and loyalty using PLS. In V. Esposito Vinzi, W. W. Chin, J. Henseler, & H. Wang (Eds.), Handbook of partial least squares: Concepts, methods and applications (Springer Handbooks of Computational Statistics Series, Vol. II, pp. 515–534). Heidelberg/Dordrecht/London/New York: Springer.
 Henseler, J. (2010). On the convergence of the partial least squares path modeling algorithm. Computational Statistics, 25(1), 107–120.
 Henseler, J. (2017). Using variance-based structural equation modeling for empirical advertising research at the interface of design and behavioral research. Journal of Advertising, 46(1), 178–192.
 Henseler, J., & Chin, W. W. (2010). A comparison of approaches for the analysis of interaction effects between latent variables using partial least squares path modeling. Structural Equation Modeling, 17(1), 82–109.
 Henseler, J., & Sarstedt, M. (2013). Goodness-of-fit indices for partial least squares path modeling. Computational Statistics, 28(2), 565–580.
 Henseler, J., Ringle, C. M., & Sinkovics, R. R. (2009). The use of partial least squares path modeling in international marketing. In R. R. Sinkovics & P. N. Ghauri (Eds.), Advances in international marketing (Vol. 20, pp. 277–320). Bingley: Emerald.
 Henseler, J., Fassott, G., Dijkstra, T. K., & Wilson, B. (2012a). Analyzing quadratic effects of formative constructs by means of variance-based structural equation modelling. European Journal of Information Systems, 21(1), 99–112.
 Henseler, J., Ringle, C. M., & Sarstedt, M. (2012b). Using partial least squares path modeling in international advertising research: Basic concepts and recent issues. In S. Okazaki (Ed.), Handbook of research in international advertising (pp. 252–276). Cheltenham: Edward Elgar Publishing.
 Henseler, J., Dijkstra, T. K., Sarstedt, M., Ringle, C. M., Diamantopoulos, A., Straub, D. W., Ketchen, D. J., Hair, J. F., Hult, G. T. M., & Calantone, R. J. (2014). Common beliefs and reality about partial least squares: Comments on Rönkkö & Evermann (2013). Organizational Research Methods, 17(2), 182–209.
 Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135.
 Henseler, J., Hubona, G. S., & Ray, P. A. (2016a). Using PLS path modeling in new technology research: Updated guidelines. Industrial Management & Data Systems, 116(1), 1–19.
 Henseler, J., Ringle, C. M., & Sarstedt, M. (2016b). Testing measurement invariance of composites using partial least squares. International Marketing Review, 33(3), 405–431.
 Houston, M. B. (2004). Assessing the validity of secondary data proxies for marketing constructs. Journal of Business Research, 57(2), 154–161.
 Hu, L.-t., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3(4), 424–453.
 Hui, B. S., & Wold, H. O. A. (1982). Consistency and consistency at large of partial least squares estimates. In K. G. Jöreskog & H. O. A. Wold (Eds.), Systems under indirect observation, part II (pp. 119–130). Amsterdam: North-Holland.
 Jöreskog, K. G. (1971). Simultaneous factor analysis in several populations. Psychometrika, 36(4), 409–426.
 Jöreskog, K. G. (1973). A general method for estimating a linear structural equation system. In A. S. Goldberger & O. D. Duncan (Eds.), Structural equation models in the social sciences (pp. 255–284). New York: Seminar Press.
 Jöreskog, K. G., & Wold, H. O. A. (1982). The ML and PLS techniques for modeling with latent variables: Historical and comparative aspects. In H. O. A. Wold & K. G. Jöreskog (Eds.), Systems under indirect observation, part I (pp. 263–270). Amsterdam: North-Holland.
 Kaufmann, L., & Gaeckler, J. (2015). A structured review of partial least squares in supply chain management research. Journal of Purchasing and Supply Management, 21(4), 259–272.
 Kock, N., & Hadaya, P. (2017). Minimum sample size estimation in PLS-SEM: The inverse square root and gamma-exponential methods. Information Systems Journal, forthcoming.
 Latan, H., & Noonan, R. (Eds.). (2017). Partial least squares structural equation modeling: Basic concepts, methodological issues and applications. Heidelberg: Springer.
 Lee, L., Petter, S., Fayard, D., & Robinson, S. (2011). On the use of partial least squares path modeling in accounting research. International Journal of Accounting Information Systems, 12(4), 305–328.
 Lei, P.-W., & Wu, Q. (2012). Estimation in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 164–179). New York: Guilford Press.
 Lohmöller, J.-B. (1989). Latent variable path modeling with partial least squares. Heidelberg: Physica.
 Marcoulides, G. A., & Chin, W. W. (2013). You write, but others read: Common methodological misunderstandings in PLS and related methods. In H. Abdi, W. W. Chin, V. Esposito Vinzi, G. Russolillo, & L. Trinchera (Eds.), New perspectives in partial least squares and related methods (Springer Proceedings in Mathematics & Statistics, Vol. 56, pp. 31–64). New York: Springer.
 Marcoulides, G. A., & Saunders, C. (2006). PLS: A silver bullet? MIS Quarterly, 30(2), iii–ix.
 Marcoulides, G. A., Chin, W. W., & Saunders, C. (2009). Foreword: A critical look at partial least squares modeling. MIS Quarterly, 33(1), 171–175.
 Marcoulides, G. A., Chin, W. W., & Saunders, C. (2012). When imprecise statistical statements become problematic: A response to Goodhue, Lewis, and Thompson. MIS Quarterly, 36(3), 717–728.
 Mateos-Aparicio, G. (2011). Partial least squares (PLS) methods: Origins, evolution, and application to social sciences. Communications in Statistics – Theory and Methods, 40(13), 2305–2317.
 McDonald, R. P. (1996). Path analysis with composite variables. Multivariate Behavioral Research, 31(2), 239–270.
 Nitzl, C. (2016). The use of partial least squares structural equation modelling (PLS-SEM) in management accounting research: Directions for future theory development. Journal of Accounting Literature, 37, 19–35.
 Nitzl, C., & Chin, W. W. (2017). The case of partial least squares (PLS) path modeling in managerial accounting. Journal of Management Control, 28(2), 137–156.
 Nitzl, C., Roldán, J. L., & Cepeda Carrión, G. (2016). Mediation analysis in partial least squares path modeling: Helping researchers discuss more sophisticated models. Industrial Management & Data Systems, 116(9), 1849–1864.
 Noonan, R., & Wold, H. O. A. (1982). PLS path modeling with indirectly observed variables: A comparison of alternative estimates for the latent variable. In K. G. Jöreskog & H. O. A. Wold (Eds.), Systems under indirect observations: Part II (pp. 75–94). Amsterdam: North-Holland.
 Nunnally, J. C., & Bernstein, I. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
 Olsson, U. H., Foss, T., Troye, S. V., & Howell, R. D. (2000). The performance of ML, GLS, and WLS estimation in structural equation modeling under conditions of misspecification and nonnormality. Structural Equation Modeling: A Multidisciplinary Journal, 7(4), 557–595.
 Peng, D. X., & Lai, F. (2012). Using partial least squares in operations management research: A practical guideline and summary of past research. Journal of Operations Management, 30(6), 467–480.
 Raithel, S., & Schwaiger, M. (2015). The effects of corporate reputation perceptions of the general public on shareholder value. Strategic Management Journal, 36(6), 945–956.
 Raithel, S., Sarstedt, M., Scharf, S., & Schwaiger, M. (2012). On the value relevance of customer satisfaction. Multiple drivers and multiple markets. Journal of the Academy of Marketing Science, 40(4), 509–525.
 Ramayah, T., Cheah, J., Chuah, F., Ting, H., & Memon, M. A. (2016). Partial least squares structural equation modeling (PLS-SEM) using SmartPLS 3.0: An updated and practical guide to statistical analysis. Singapore: Pearson.
 Reinartz, W. J., Haenlein, M., & Henseler, J. (2009). An empirical comparison of the efficacy of covariance-based and variance-based SEM. International Journal of Research in Marketing, 26(4), 332–344.
 Richter, N. F., Sinkovics, R. R., Ringle, C. M., & Schlägel, C. (2016a). A critical look at the use of SEM in international business research. International Marketing Review, 33(3), 376–404.
 Richter, N. F., Cepeda, G., Roldán, J. L., & Ringle, C. M. (2016b). European management research using partial least squares structural equation modeling (PLS-SEM). European Management Journal, 34(6), 589–597.
 Rigdon, E. E. (2012). Rethinking partial least squares path modeling: In praise of simple methods. Long Range Planning, 45(5–6), 341–358.
 Rigdon, E. E. (2013a). Partial least squares path modeling. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed., Vol. 1). Charlotte: Information Age Publishing.
 Rigdon, E. E. (2013b). Partial least squares path modeling. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed., pp. 81–116). Charlotte: Information Age Publishing.
 Rigdon, E. E. (2014a). Comment on "Improper use of endogenous formative variables". Journal of Business Research, 67(1), 2800–2802.
 Rigdon, E. E. (2014b). Rethinking partial least squares path modeling: Breaking chains and forging ahead. Long Range Planning, 47(3), 161–167.
 Rigdon, E. E. (2016). Choosing PLS path modeling as analytical method in European management research: A realist perspective. European Management Journal, 34(6), 598–605.
 Rigdon, E. E., Becker, J.-M., Rai, A., Ringle, C. M., Diamantopoulos, A., Karahanna, E., Straub, D., & Dijkstra, T. K. (2014). Conflating antecedents and formative indicators: A comment on Aguirre-Urreta and Marakas. Information Systems Research, 25(4), 780–784.
 Rigdon, E. E., Sarstedt, M., & Ringle, C. M. (2017). On comparing results from CB-SEM and PLS-SEM: Five perspectives and five recommendations. Marketing ZFP – Journal of Research and Management, forthcoming.
 Ringle, C. M., & Sarstedt, M. (2016). Gain more insight from your PLS-SEM results: The importance-performance map analysis. Industrial Management & Data Systems, 116(9), 1865–1886.
 Ringle, C. M., Sarstedt, M., & Straub, D. W. (2012). A critical look at the use of PLS-SEM in MIS Quarterly. MIS Quarterly, 36(1), iii–xiv.
 Ringle, C. M., Sarstedt, M., Schlittgen, R., & Taylor, C. R. (2013). PLS path modeling and evolutionary segmentation. Journal of Business Research, 66(9), 1318–1324.
 Ringle, C. M., Sarstedt, M., & Schlittgen, R. (2014). Genetic algorithm segmentation in partial least squares structural equation modeling. OR Spectrum, 36(1), 251–276.
 Roldán, J. L., & Sánchez-Franco, M. J. (2012). Variance-based structural equation modeling: Guidelines for using partial least squares in information systems research. In M. Mora, O. Gelman, A. L. Steenkamp, & M. Raisinghani (Eds.), Research methodologies, innovations and philosophies in software systems engineering and information systems (pp. 193–221). Hershey: IGI Global.
 Sarstedt, M., & Mooi, E. A. (2014). A concise guide to market research: The process, data, and methods using IBM SPSS statistics. Heidelberg: Springer.
 Sarstedt, M., Becker, J.-M., Ringle, C. M., & Schwaiger, M. (2011a). Uncovering and treating unobserved heterogeneity with FIMIX-PLS: Which model selection criterion provides an appropriate number of segments? Schmalenbach Business Review, 63(1), 34–62.
 Sarstedt, M., Henseler, J., & Ringle, C. M. (2011b). Multigroup analysis in partial least squares (PLS) path modeling: Alternative methods and empirical results. In M. Sarstedt, M. Schwaiger, & C. R. Taylor (Eds.), Advances in international marketing (Vol. 22, pp. 195–218). Bingley: Emerald.
 Sarstedt, M., Wilczynski, P., & Melewar, T. C. (2013). Measuring reputation in global markets – A comparison of reputation measures' convergent and criterion validities. Journal of World Business, 48(3), 329–339.
 Sarstedt, M., Ringle, C. M., Smith, D., Reams, R., & Hair, J. F. (2014). Partial least squares structural equation modeling (PLS-SEM): A useful tool for family business researchers. Journal of Family Business Strategy, 5(1), 105–115.
 Sarstedt, M., Diamantopoulos, A., Salzberger, T., & Baumgartner, P. (2016a). Selecting single items to measure doubly concrete constructs: A cautionary tale. Journal of Business Research, 69(8), 3159–3167.
 Sarstedt, M., Hair, J. F., Ringle, C. M., Thiele, K. O., & Gudergan, S. P. (2016b). Estimation issues with PLS and CB-SEM: Where the bias lies! Journal of Business Research, 69(10), 3998–4010.
 Sarstedt, M., Bengart, P., Shaltoni, A. M., & Lehmann, S. (2017). The use of sampling methods in advertising research: A gap between theory and practice. International Journal of Advertising, forthcoming.
 Schlittgen, R., Ringle, C. M., Sarstedt, M., & Becker, J.-M. (2016). Segmentation of PLS path models by iterative reweighted regressions. Journal of Business Research, 69(10), 4583–4592.
 Schloderer, M. P., Sarstedt, M., & Ringle, C. M. (2014). The relevance of reputation in the nonprofit sector: The moderating effect of socio-demographic characteristics. International Journal of Nonprofit and Voluntary Sector Marketing, 19(2), 110–126.
 Schwaiger, M. (2004). Components and parameters of corporate reputation: An empirical study. Schmalenbach Business Review, 56(1), 46–71.
 Shah, R., & Goldstein, S. M. (2006). Use of structural equation modeling in operations management research: Looking back and forward. Journal of Operations Management, 24(2), 148–169.
 Shmueli, G. (2010). To explain or to predict? Statistical Science, 25(3), 289–310.
 Shmueli, G., Ray, S., Velasquez Estrada, J. M., & Chatla, S. B. (2016). The elephant in the room: Evaluating the predictive performance of PLS models. Journal of Business Research, 69(10), 4552–4564.
 Sosik, J. J., Kahai, S. S., & Piovoso, M. J. (2009). Silver bullet or voodoo statistics? A primer for using the partial least squares data analytic technique in group and organization research. Group & Organization Management, 34(1), 5–36.
 Stieglitz, S., Dang-Xuan, L., Bruns, A., & Neuberger, C. (2014). Social media analytics. Business & Information Systems Engineering, 6(2), 89–96.
 Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society: Series B, 36(2), 111–147.
 Tenenhaus, M., Esposito Vinzi, V., Chatelin, Y.-M., & Lauro, C. (2005). PLS path modeling. Computational Statistics & Data Analysis, 48(1), 159–205.
 Westland, J. C. (2015). Partial least squares path analysis. In Structural equation models: From paths to networks (pp. 23–46). Cham: Springer International Publishing.
 Willaby, H. W., Costa, D. S. J., Burns, B. D., MacCann, C., & Roberts, R. D. (2015). Testing complex models with small sample sizes: A historical overview and empirical demonstration of what partial least squares (PLS) can offer differential psychology. Personality and Individual Differences, 84, 73–78.
 Wold, H. O. A. (1975). Path models with latent variables: The NIPALS approach. In H. M. Blalock, A. Aganbegian, F. M. Borodkin, R. Boudon, & V. Capecchi (Eds.), Quantitative sociology: International perspectives on mathematical and statistical modeling (pp. 307–357). New York: Academic.
 Wold, H. O. A. (1980). Model construction and evaluation when theoretical knowledge is scarce: Theory and application of PLS. In J. Kmenta & J. B. Ramsey (Eds.), Evaluation of econometric models (pp. 47–74). New York: Academic.
 Wold, H. O. A. (1982). Soft modeling: The basic design and some extensions. In K. G. Jöreskog & H. O. A. Wold (Eds.), Systems under indirect observations: Part II (pp. 1–54). Amsterdam: North-Holland.
 Wold, H. O. A. (1985). Partial least squares. In S. Kotz & N. L. Johnson (Eds.), Encyclopedia of statistical sciences (Vol. 6, pp. 581–591). New York: Wiley.