
MIMIC models, formative indicators and the joys of research

Review article published in AMS Review

Abstract

The use of MIMIC models in formative measurement is revisited in light of recent criticism concerning their validity. Specifically, the conventional MIMIC model is compared to alternative specifications recently suggested in the literature and issues relating to replacing formative latent variables with composites, predefining rather than estimating indicator weights, and aggregating indicators into single scores are discussed. Based on this analysis, concrete guidelines are provided to researchers on how to employ MIMIC models with formative indicators in their research endeavors.

Fig 1
Fig 2


Notes

  1. Needless to say, our researchers also considered the possibility of differentially weighting the five formative indicators. Unfortunately, the scholarly literature on cafeteria satisfaction was not sufficiently developed to enable a theoretically justifiable allocation of differential indicator weights. They thus set each weight to .20 by adopting the following algorithm: divide 1 by the number of indicators. Although not particularly sophisticated, this approach resulted in weights that – as discussed in a later section – worked well.

  2. This view had already been expressed by Rossiter (2002) more than 10 years ago in his original exposition of the C-OAR-SE procedure: “when the conceptual definition calls for it, the components … should be weighted before computing the index score … Components in formed measures … should not be empirically weighted” (Rossiter 2002, pp. 315, 325).

  3. Specifically, the following modeling guidelines are provided by Lee et al. (2013, p. 15): “(a) Use predefined weights for formative indicators that are explicitly part of the construct definition. (b) Specify the weights using some explicit prior theory (e.g., the weights are all the same) or use some empirical method to determine the weights (e.g., a survey of key informants, Delphi method, or utility function methods). (c) Use these weights to create a single composite score for the formative variable, using a standard algorithm that is also explicitly part of the construct definition. (d) Use the single composite score to test theoretical models (e.g., to identify patterns of covariance between the composite score and other variables)”. Again, the similarity to Rossiter’s (2002) C-OAR-SE guidelines for constructing formed-attribute scales is evident.
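
Steps (a)–(d) above can be sketched in a few lines of Python. The data, weights, and criterion variable below are invented purely for illustration; nothing here is taken from Lee et al.'s (2013) paper itself:

```python
# Steps (a)-(d) with predefined equal weights, on invented toy data.
weights = [0.20] * 5                      # (a)/(b): weights fixed a priori, all equal

def composite(indicators):                # (c): a single composite score per respondent
    return sum(w * x for w, x in zip(weights, indicators))

# hypothetical responses on the five formative indicators
data = [[4, 3, 5, 2, 4], [2, 2, 3, 1, 2], [5, 4, 4, 3, 5]]
scores = [composite(row) for row in data]

# (d): relate the composite to another variable, here via Pearson correlation
outcome = [4, 2, 5]                       # hypothetical criterion variable
n = len(scores)
m_s, m_o = sum(scores) / n, sum(outcome) / n
num = sum((s - m_s) * (o - m_o) for s, o in zip(scores, outcome))
den = (sum((s - m_s) ** 2 for s in scores)
       * sum((o - m_o) ** 2 for o in outcome)) ** 0.5
r = num / den
```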

  4. However, its fit was significantly worse than that of both Model 2 and Model 1. Note that – as a moment’s reflection will readily reveal – predefining equal indicator weights can never result in a better model fit statistic (i.e., a lower χ2) than estimating the weights under an equality constraint. This implies that Model 4 can never outperform Model 3 in terms of fit (although the two can fit very similarly, as was the case in our example).
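
The nesting argument behind this claim can be illustrated numerically: fixing a common weight in advance is a constrained special case of estimating it, so the constrained model’s discrepancy can never be smaller. A least-squares sketch on invented composite scores (the numbers are illustrative only):

```python
# Invented equal-weight composite scores and an outcome variable.
c = [3.6, 2.0, 4.2, 3.0, 2.6]
y = [3.8, 1.9, 4.5, 3.1, 2.4]

def sse(b):
    """Residual sum of squares for y = b*c + e (no intercept, for simplicity)."""
    return sum((yi - b * ci) ** 2 for yi, ci in zip(y, c))

# A freely estimated common weight (closed-form least squares)...
b_star = sum(ci * yi for ci, yi in zip(c, y)) / sum(ci ** 2 for ci in c)

# ...can never fit worse than a weight predefined by the researcher (here b = 1):
assert sse(b_star) <= sse(1.0)
```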

  5. As Bollen and Bauldry (2011, p. 271, added emphasis) observe, “it is theoretically possible but extremely unlikely, that the disturbance variance … is zero, which would make the η1 variable completely determined by its causal indicators. In this hypothetical case the coefficients of the causal indicators would remain structural even though the disturbance is absent”.

  6. Indeed, using a MIMIC model allows one to scrutinize whether this is actually the case. If, as Lee et al. (2013, p. 11) argue, the formatively-measured latent variable (“generation quality” η1) and the reflectively measured latent variable (“managers’ perceptions of the quality of information generation” η2) should really be “different things” and “may not be good proxies for each other”, then this would cast doubt on the conceptual soundness of the quality concept’s operationalization.

  7. As an aside, it should be noted that Lee et al.’s (2013, Figure 4, p.12) proposed two-variable model is directly equivalent to a MIMIC model because “all formative constructs predicting a single latent variable that is measured reflectively with two or more indicators can be transformed to a MIMIC model” (Bagozzi 2011, p. 273; see also Diamantopoulos 2011, 2013). In essence, what CLC propose instead of the MIMIC model is another MIMIC model!
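
For readers who prefer equations to polemics, the covariance structure implied by such a MIMIC model (η = γ′x + ζ for the structural part, y_j = λ_j·η + ε_j for the measurement part) can be written out directly. The parameter values below are made up purely for illustration:

```python
# A minimal sketch of the covariances a MIMIC model implies,
# with two causal indicators (x1, x2) and three reflective ones (y1..y3).
g = [0.5, 0.3]            # gamma: effects of the causal indicators on eta
phi = [[1.0, 0.4],        # covariance matrix of x1, x2
       [0.4, 1.0]]
psi = 0.2                 # disturbance variance Var(zeta)
lam = [1.0, 0.8, 0.7]     # loadings of the reflective indicators
theta = [0.3, 0.3, 0.3]   # measurement error variances

# Var(eta) = g' Phi g + psi
var_eta = sum(g[i] * phi[i][j] * g[j]
              for i in range(2) for j in range(2)) + psi

# (Phi g)_i gives Cov(x_i, eta), hence Cov(x_i, y_j) = lambda_j * (Phi g)_i
phi_g = [sum(phi[i][k] * g[k] for k in range(2)) for i in range(2)]

def cov_yy(j, k):
    c = lam[j] * lam[k] * var_eta
    return c + theta[j] if j == k else c

def cov_xy(i, j):
    return lam[j] * phi_g[i]
```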

  8. As Bollen (1989, pp. 13–14) observes, “the disturbance ζi includes those variables that influence ηi but are excluded from the ηi equation. We assume that these numerous omitted factors fused into ζi have E(ζi) = 0 and are uncorrelated with the exogenous variables”.

  9. Hardin et al. (2011), in their MIMIC model of virtual team goal effectiveness, also found that, compared to the excellent fit obtained with a non-zero disturbance (χ2(8) = 13.47, p = .097; RMSEA = .053; CFI = .996), model fit suffered dramatically when the error term was eliminated (χ2(9) = 247.52, p < .0001; RMSEA = .331; CFI = .801).
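
As an aside, RMSEA can be recovered from a reported χ2 and its degrees of freedom once a sample size is assumed. Hardin et al.’s N is not reported here, so the N = 244 below is a hypothetical value that happens to reproduce figures close to those cited (small rounding differences remain):

```python
import math

def rmsea(chi_sq, df, n):
    """Steiger-Lind RMSEA computed from chi-square, df, and sample size n."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

with_disturbance = rmsea(13.47, 8, 244)      # approx. .053
without_disturbance = rmsea(247.52, 9, 244)  # approx. .330
```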

  10. The reason is that, in the expression used to calculate the standardized coefficients, the common unstandardized weight cancels out (the formal proof is available – for a small fee – from the authors upon request). Note that this is not the case for the modified versions of Model 3 and Model 4, which include a freely-estimated construct-level error term.

  11. In fact, as long as the weights remain equal, overall fit for Model 5 stays the same regardless of the specific weights allocated.
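
This is easy to verify: rescaling all (equal) weights by a common factor leaves every standardized association – and hence overall fit – untouched, because correlations are scale-free. A quick check on invented data:

```python
def corr(a, b):
    """Pearson correlation in plain Python (no libraries needed)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

sums = [18, 10, 21, 15, 13]        # invented indicator sums per respondent
y = [3.8, 1.9, 4.5, 3.1, 2.4]      # invented outcome variable
c_small = [0.2 * s for s in sums]  # equal weights of .20
c_large = [0.7 * s for s in sums]  # equal weights of .70

# Identical standardized association regardless of the common weight:
assert abs(corr(c_small, y) - corr(c_large, y)) < 1e-9
```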

  12. In fact, Model 5 is strictly speaking also a MIMIC model (with a single cause and three reflective outcomes – see Figure 2).

  13. For example, the coefficients for Model 1 in Table 3 show that x1 is much more strongly related to satisfaction than the other indicators – a finding of potential managerial value.

  14. To illustrate these differences in thinking about measurement, consider the following example. After one of the long nights in the local bar, our friends were simply curious to know how much alcohol they had consumed. Following CLC, they constructed an observed variable by simply counting the number of glasses of (a) beer, (b) wine, (c) hard liquor, and (d) any other types of alcohol (aperitifs, digestifs, etc.) to come up with the total amount consumed (not revealed here!). Note that if our friends had instead intended to gauge their degree of alcoholization, they could have used the same (objective) information. However, in this case, the consumed glasses of beer, wine etc. would have been modeled as formative indicators of a latent variable and their structural links to the latter would have been subject to estimation and testing.

  15. A MIMIC model of graduate students’ satisfaction with the local dry cleaners was the next project in line.

References

  • Bagozzi, R. P. (2011). Measurement and meaning in information systems and organizational research: methodological and philosophical foundations. MIS Quarterly, 35(2), 261–292.

  • Bollen, K. A. (1989). Structural equations with latent variables. New York: John Wiley & Sons.

  • Bollen, K. A. (2007). Interpretational confounding is due to misspecification, not to type of indicator: comment on Howell, Breivik, and Wilcox. Psychological Methods, 12(2), 219–228.

  • Bollen, K. A. (2011). Evaluating effect, composite, and causal indicators in structural equation models. MIS Quarterly, 35(2), 359–372.

  • Bollen, K. A., & Bauldry, S. (2011). Three Cs in measurement models: causal indicators, composite indicators, and covariates. Psychological Methods, 16(3), 265–284.

  • Bollen, K. A., & Davis, W. (2009). Causal indicator models: identification, estimation, and testing. Structural Equation Modeling, 16(3), 498–522.

  • Bollen, K. A., & Lennox, R. (1991). Conventional wisdom in measurement: a structural equations perspective. Psychological Bulletin, 110(2), 305–314.

  • Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110(2), 203–219.

  • Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS and SIMPLIS: Basic concepts, applications and programming. Mahwah: Lawrence Erlbaum Associates.

  • Cadogan, J. W., & Lee, N. J. (2013). Improper use of endogenous formative variables. Journal of Business Research, 66(2), 233–241.

  • Cadogan, J. W., Souchon, A. L., & Procter, D. B. (2008). The quality of market-oriented behaviors: formative index construction. Journal of Business Research, 61(12), 1263–1277.

  • Cadogan, J. W., Lee, N., & Chamberlain, L. (2013). Formative variables are unreal variables: why the formative MIMIC model is invalid. AMS Review, 3(1), 38–49.

  • Cenfetelli, R. T., & Bassellier, G. (2009). Interpretation of formative measurement in information systems research. MIS Quarterly, 33(4), 689–707.

  • Diamantopoulos, A. (2006). The error term in formative measurement models: interpretation and modeling implications. Journal of Modelling in Management, 1(1), 7–17.

  • Diamantopoulos, A. (2011). Incorporating formative measures into covariance-based structural equation models. MIS Quarterly, 35(2), 335–358.

  • Diamantopoulos, A. (2013). MIMIC models and formative measurement: some thoughts on Lee, Cadogan & Chamberlain. AMS Review, 3(1), 30–37.

  • Diamantopoulos, A., & Papadopoulos, N. (2010). Assessing the cross-national invariance of formative measures: guidelines for international business researchers. Journal of International Business Studies, 41(2), 360–370.

  • Diamantopoulos, A., & Siguaw, J. A. (2000). Introducing LISREL: a guide for the uninitiated. London: Sage Publications.

  • Diamantopoulos, A., & Siguaw, J. A. (2006). Formative versus reflective indicators in organizational measure development: a comparison and empirical illustration. British Journal of Management, 17(4), 263–282.

  • Diamantopoulos, A., & Winklhofer, H. (2001). Index construction with formative indicators: an alternative to scale development. Journal of Marketing Research, 38(2), 269–277.

  • Diamantopoulos, A., Riefler, P., & Roth, K. P. (2008). Advancing formative measurement models. Journal of Business Research, 61(12), 1203–1218.

  • Edwards, J. E. (2010). The fallacy of formative measurement. Organizational Research Methods, 14(2), 370–388.

  • Franke, G., Preacher, K. J., & Rigdon, E. (2008). Proportional structural effects of formative indicators. Journal of Business Research, 61(12), 1229–1237.

  • Grace, J. B., & Bollen, K. A. (2008). Representing general theoretical concepts in structural equation models: the role of composite variables. Environmental and Ecological Statistics, 15(2), 191–213.

  • Hardin, A. M., Chang, J. C., Fuller, M. A., & Torkzadeh, G. (2011). Formative measurement and academic research: in search of measurement theory. Educational and Psychological Measurement, 71(2), 281–305.

  • Howell, R. D. (2013). Conceptual clarity in measurement – constructs, composites, and causes: a commentary on Lee, Cadogan and Chamberlain. AMS Review, 3(1), 18–23.

  • Howell, R. D., Breivik, E., & Wilcox, J. B. (2007). Reconsidering formative measurement. Psychological Methods, 12(2), 205–218.

  • Jarvis, C. B., Mackenzie, S. B., & Podsakoff, P. M. (2003). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research, 30(4), 199–218.

  • Jöreskog, K. G., & Goldberger, A. S. (1975). Estimation of a model with multiple indicators and multiple causes of a single latent variable. Journal of the American Statistical Association, 70(351), 631–639.

  • Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York: Guilford Press.

  • Lee, N., & Cadogan, J. W. (2013). Problems with formative and higher order reflective variables. Journal of Business Research, 66(2), 242–247.

  • Lee, N., Cadogan, J. W., & Chamberlain, L. (2013). The MIMIC model and formative variables: problems and solutions. AMS Review, 3(1), 3–17.

  • MacKenzie, S. B. (2003). The dangers of poor construct conceptualization. Journal of the Academy of Marketing Science, 31(3), 323–326.

  • Petter, S., Straub, D., & Rai, A. (2007). Specifying formative constructs in information systems research. MIS Quarterly, 31(4), 623–656.

  • Rigdon, E. W. (2013). Lee, Cadogan, and Chamberlain: an excellent point … but what about that iceberg? AMS Review, 3(1), 24–29.

  • Rossiter, J. R. (2002). The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing, 19(4), 305–335.

  • Rossiter, J. R. (2005). Reminder: a horse is a horse. International Journal of Research in Marketing, 22(1), 23–25.

  • Rossiter, J. R. (2008). Content validity of measures of abstract constructs in management and organizational research. British Journal of Management, 19(4), 380–388.

  • Rossiter, J. R. (2011). Marketing measurement revolution: the C-OAR-SE method and why it must replace psychometrics. European Journal of Marketing, 45(11/12), 1561–1588.

  • Schumacker, R. E., & Lomax, R. G. (2010). A beginner’s guide to structural equation modeling (3rd ed.). New York: Routledge.

  • Winklhofer, H., & Diamantopoulos, A. (2002). Managerial evaluation of sales forecasting effectiveness: a MIMIC modeling approach. International Journal of Research in Marketing, 19(2), 151–166.


Corresponding author

Correspondence to Adamantios Diamantopoulos.

APPENDIX – A note on debating tactics


Whilst reading the four CLC papers that were attached to the mysterious anonymous email, our researchers could not help but notice that, in responding to Diamantopoulos’ (2013) commentary, Cadogan et al. (2013) adopted some truly ingenious debating tactics to defend their original position. As these tactics may be of use to other researchers wishing to silence critical voices, they are briefly described below:

  1. Putting (lots of) words into the opponent’s mouth

Cadogan et al. (2013, p. 38) begin their rejoinder to Diamantopoulos’s (2013) commentary by stating that “his stance can be summed up in the following way:

It is entirely possible for a singular entity, with singular conceptual content, to also be multifaceted in conceptual content. Likewise, it is possible for a grouping of conceptually different entities, that is, a grouping of multiple entities that potentially have conceptually orthogonal meanings, to also have singular, equivalent, conceptual content. In other words, there is no such thing as either unidimensionality or multidimensionality of variables: whether an entity is unidimensional or multidimensional is in the hands of the individual researcher, such that if a researcher wishes to do so, she can decide that a variable is both unidimensional and multidimensional at the same time. As such, the MIMIC model is a usable tool for modeling formative variables. On this reading, Diamantopoulos’ stance appears to be illogical and contradictory”.

As anyone who has bothered to read Diamantopoulos (2013) will readily confirm, nowhere in his commentary is there any mention of either “unidimensionality” or “multidimensionality” issues. There is also no reference to “multifaceted in conceptual content”, “singular, equivalent, conceptual content”, or “conceptually orthogonal meanings” (whatever this might mean!). It therefore remains a mystery how the above passage can represent a summary of Diamantopoulos’ (2013) stance on MIMIC models given that none of the terms/issues above were even mentioned (let alone discussed) in his commentary!

  2. Denial of the (very) obvious

Cadogan et al. (2013, p. 44) further state that “although at no point in LCC do we say that the Advertising Expenditure (AE) measure was developed by Diamantopoulos and Winklhofer (2001), it appears that Diamantopoulos (2013) got the impression that we did”.

The exact passage from Lee et al. (2013) is reproduced below and it is left to the reader to judge how it could possibly be interpreted as not attributing the said example to Diamantopoulos and Winklhofer (2001)!

“Alternatively, consider a variable such as ‘Advertising Expenditure,’ one of Diamantopoulos and Winklhofer’s (2001 pp. 275) examples of a formatively-modeled variable” (Lee et al. 2013, p. 7, emphasis added).

  3. Glossing over (much) substance

Cadogan et al. (2013, p. 44 added emphasis) also maintain that “we should have made authorship of the scale more obvious, and we should also have been more explicit when describing the AE variable and its origins. That said, the way that LCC attributed authorship and discussed operationalization of the original AE measure has no bearing on the logic being used by LCC to make its point”.

So, in essence, the reader is told that the fact that Lee et al. (2013) (1) misattributed authorship, (2) evidently misconstrued the AE variable as referring to actual advertising expenditures rather than perceived evaluation (by managers) of the firm’s advertising expenditures in different media relative to competition, (3) used inappropriate reflective items in their MIMIC model of AE, and (4) completely ignored that the measurement model for AE was misspecified in the first place, has “no bearing” on the logic of their argument! A case of stubbornness or arrogance? Again, it is left to the audience to decide.

  4. Quoting in (completely) wrong context

Cadogan et al. (2013, p. 39, added emphasis) state that “perhaps Diamantopoulos is right, and LCC, in challenging a well-established methodological tool, have joined Howell (2013), Rigdon (2013), Borsboom (e.g., Borsboom 2005), and the like, forming a body of – his word – ‘misguided’ academics who cannot see some obvious truth, who are failing to grasp something fundamental about “what things are”, and “how we measure them””.

Ignoring the rather “evangelical” tone of the above sentence, suffice it to say that the term ‘misguided’ as used by Diamantopoulos did not refer to individuals but to the (rigid) adherence to a particular position; the relevant passage is reproduced below and the reader is – yet again – invited to draw his/her own conclusions.

“Thus, rigidly believing that “a MIMIC model is not a formative latent variable model but is rather a reflective variable model” (LCC 2013) appears to be misguided” (Diamantopoulos 2013, p. 33).

The masterly combination of the above debating tactics by Cadogan et al. (2013) undoubtedly provides a serious challenge to Rossiter’s (2005) seminal work on how (not) to respond to criticism – novice researchers, please take note!


Cite this article

Diamantopoulos, A., Temme, D. MIMIC models, formative indicators and the joys of research. AMS Rev 3, 160–170 (2013). https://doi.org/10.1007/s13162-013-0050-0
