Methods requirements are ratcheting up in our review process, leading Houston (2016) to characterize it as a “methods gauntlet.” All of us want rigor in our research, and some seem to believe that rigor means using the latest or most complicated method. I disagree. Rigorous (defined as extremely thorough, exhaustive, or accurate) does not imply newness or complicatedness. Rigor explicitly includes accuracy, and it implies that one has chosen the right method for the problem. I contend that we may have lost sight of rigor in our review process. Through this paper, I call for a re-thinking of the nature of rigorous marketing strategy research. In particular, I will discuss three new and complicated methods that are establishing something like imperium (i.e., power to command): they are becoming a necessary condition for publication. I’ll use insights from those who apply these methods to argue that the methods are not appropriate for most marketing strategy applications and hence cannot be rigorous.

Structural models

In the review process we are sometimes told that a “reduced-form model” (typically a regression model) is inadequate. Instead, we are told, we need a “structural model,” a model that incorporates agents’ optimizing microbehavior (i.e., customers’ forward-looking behaviors or firms’ responses to competitors’ actions). With such a model, one can foresee likely behavior when policy or strategy changes, something a reduced-form model cannot do. However, the incorporation of optimizing microbehavior requires the imposition of a set of restrictive assumptions about the form of the utility or objective function, the form of the budget or other constraints, the distribution of unobservable components, and, possibly, the nature of equilibrium in a given market (Chintagunta et al. 2006; Chan 2006). Violation of these assumptions could lead to incorrect inferences (Chintagunta et al. 2006; Mazzeo 2006). Despite this, structural models’ assumptions are seldom tested (Chintagunta et al. 2006). Perhaps because these assumptions are expected to be violated, Chintagunta et al. (2006) and others (Srinivasan 2006; Hartman 2006; Chan 2006; Punj 2006) note that structural models are expected to fit and predict the data less well than reduced-form models. In sum, in order to foresee reactions to policy or strategy change, a structural model requires strong assumptions that are not expected to hold in practice. Because violation of those assumptions corrupts the structural model’s inferences, one should expect a reduced-form model to outperform the structural model.

Given the above, it makes no sense that a paper built on a reduced-form model should be rejected because the model is not structural. Instead, we should ask structural modelers to report the extent to which their structural model’s fit and predictions are inferior to those of the related reduced-form model. With that information, a reader can determine his or her level of confidence in the foresight provided by that structural model.
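To make the proposed reporting concrete, the following is a minimal, hypothetical sketch (in Python, not drawn from any of the papers cited here) of how such a side-by-side comparison might look: a reduced-form log-linear regression and a stylized logit-demand specification standing in for a structural model are fit to the same simulated weekly price and sales data, and in-sample and holdout fit are reported for both. The data-generating process, the two specifications, and all parameter values are illustrative assumptions only.

```python
# Minimal sketch (not from the paper): reporting how a stylized "structural-style"
# specification compares with a reduced-form regression on the same holdout data.
# The data-generating process and both specifications are hypothetical and chosen
# only to illustrate the reporting format proposed above.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- hypothetical weekly data: price and unit sales for one product ---
T = 120
price = rng.uniform(1.0, 4.0, T)
market_size = 10_000
true_share = 1.0 / (1.0 + np.exp(-(1.5 - 0.9 * price)))        # logistic demand
sales = market_size * true_share * np.exp(rng.normal(0, 0.05, T))

train, hold = slice(0, 90), slice(90, T)                        # simple holdout split

# --- reduced-form model: log-linear regression of sales on price (OLS) ---
X = np.column_stack([np.ones(T), price])
beta, *_ = np.linalg.lstsq(X[train], np.log(sales[train]), rcond=None)
rf_pred = np.exp(X @ beta)

# --- "structural-style" model: logit share times market size, fit by least squares ---
def sse(theta):
    a, b = theta
    share = 1.0 / (1.0 + np.exp(-(a - b * price[train])))
    return np.sum((sales[train] - market_size * share) ** 2)

a_hat, b_hat = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead").x
st_pred = market_size / (1.0 + np.exp(-(a_hat - b_hat * price)))

# --- report in-sample and holdout fit side by side, as proposed above ---
def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

for name, pred in [("reduced-form", rf_pred), ("structural-style", st_pred)]:
    print(f"{name:17s}  in-sample RMSE = {rmse(sales[train], pred[train]):8.1f}"
          f"  holdout RMSE = {rmse(sales[hold], pred[hold]):8.1f}")
```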

Endogeneity

One of the most frequently levelled charges by reviewers is that the variable of interest is endogenous, where endogeneity is loosely defined as correlation between a predictor and the error term in a regression. That is, in the following simplified model:

$$ Y_t = a + b\,X_t + \mathrm{error}_t $$

X is endogenous if \(X_t\) and \(\mathrm{error}_t\) are correlated. When this happens, the estimated coefficient for X, \(\widehat{b}\), can be biased. A frequent fix for this potential bias is the instrumental variables technique. It is important to note, though, that using an instrumental variable does not completely remove bias. Instead, if the instrumental variable assumptions hold, this technique yields an estimate of X’s coefficient that is “consistent,” i.e., one that converges to the true value as the sample size approaches infinity. The instrumental variable assumptions that must hold are as follows (both are illustrated in the simulation sketch after the list):

  1. The instrumental variable, A, is correlated with the endogenous predictor, X.

     (This is referred to as the “inclusion restriction.” The strength of the correlation between A and X is referred to as the “strength” of the instrumental variable.)

  2. Other than its influence through X, the instrumental variable, A, must not be correlated with the dependent variable, Y.

     (This is referred to as the “exclusion restriction.” The instrument is only “valid” if this assumption holds. Importantly, this assumption cannot be tested and must, therefore, be argued from theory.)
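To see both assumptions at work, here is a minimal Monte Carlo sketch (hypothetical, not taken from Rossi 2014 or any study discussed here) of the simplified model above: the predictor X is constructed to be correlated with the error, so OLS is biased, while an instrument A that is both strong and valid lets a hand-rolled two-stage least squares recover the true coefficient on average, though with more noise. All variable names and numbers are illustrative assumptions.

```python
# Minimal sketch (not from the paper): endogeneity bias in OLS and the two
# instrumental-variable assumptions above, using simulated data in which the
# instrument A is both strong (correlated with X) and valid (affects Y only
# through X). Every quantity here is hypothetical and for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n, b_true, reps = 500, 2.0, 2_000

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ols_b, iv_b = [], []
for _ in range(reps):
    u = rng.normal(size=n)                      # structural error
    A = rng.normal(size=n)                      # instrument: independent of u (valid)
    X = 0.8 * A + 0.6 * u + rng.normal(size=n)  # correlated with A (strong) and u (endogenous)
    Y = 1.0 + b_true * X + u

    Z = np.column_stack([np.ones(n), X])
    ols_b.append(ols(Y, Z)[1])

    # two-stage least squares: regress X on A, then Y on the fitted X
    W = np.column_stack([np.ones(n), A])
    X_hat = W @ ols(X, W)
    iv_b.append(ols(Y, np.column_stack([np.ones(n), X_hat]))[1])

print(f"true b = {b_true}")
print(f"OLS  mean estimate = {np.mean(ols_b):.3f}  (biased upward by the X-error correlation)")
print(f"2SLS mean estimate = {np.mean(iv_b):.3f}  (close to truth with a valid, strong instrument)")
print(f"2SLS std deviation = {np.std(iv_b):.3f} vs OLS {np.std(ols_b):.3f}  (IV estimates are noisier)")
```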

Rossi (2014) tells us that even if the second assumption held, the fact that sample sizes are not infinite means that instrumental variable-based estimates can have substantial bias and exceptionally large confidence intervals (i.e., instrumental variable-based estimates can be both biased and less accurate than regression estimates).
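A hypothetical variation on the simulation above illustrates this finite-sample point: when the instrument is weak and the sample is modest, the two-stage least squares estimates scatter widely and their root-mean-squared error can dwarf that of plain OLS. The sample size, instrument strength, and other numbers below are illustrative assumptions, not values taken from Rossi (2014).

```python
# Minimal sketch (not from the paper) of the finite-sample point above: with a weak
# instrument and a modest sample, two-stage least squares estimates spread widely
# and their root-mean-squared error can exceed that of plain OLS. All numbers
# (sample size, instrument strength) are hypothetical choices for illustration.
import numpy as np

rng = np.random.default_rng(2)
n, b_true, reps = 100, 2.0, 2_000

def slope(y, x):
    Z = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(Z, y, rcond=None)[0][1]

ols_est, iv_est = [], []
for _ in range(reps):
    u = rng.normal(size=n)
    A = rng.normal(size=n)
    X = 0.05 * A + 0.6 * u + rng.normal(size=n)     # weak first stage: A barely moves X
    Y = 1.0 + b_true * X + u

    ols_est.append(slope(Y, X))
    # manual 2SLS with the weak instrument
    X_hat = np.column_stack([np.ones(n), A]) @ np.linalg.lstsq(
        np.column_stack([np.ones(n), A]), X, rcond=None)[0]
    iv_est.append(slope(Y, X_hat))

def report(name, est):
    est = np.asarray(est)
    rmse = np.sqrt(np.mean((est - b_true) ** 2))
    lo, hi = np.percentile(est, [2.5, 97.5])
    print(f"{name:4s} RMSE = {rmse:8.2f}   95% of estimates fall in [{lo:8.2f}, {hi:8.2f}]")

report("OLS", ols_est)
report("2SLS", iv_est)
```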

Rossi (2014) also points out that the second assumption probably does not hold in marketing applications. For the 46 applications of instrumental variables published in Marketing Science or QME in the 10 years prior to Rossi’s (2014) article, effective theoretical support for instrument validity was not provided. Further, Rossi (2014, p. 671) admits that he “cannot imagine any economic argument that could justify the use of lagged predictors” (the most frequently used instrumental variable in marketing applications) as an instrument. Importantly, if this second assumption underlying the instrumental variable technique does not hold, then using the technique definitely degrades the quality of the coefficient estimate, leaving regression estimates superior.
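The consequence of an invalid instrument can likewise be sketched with hypothetical numbers: when the exclusion restriction fails because the “instrument” also affects Y directly, two-stage least squares converges to the wrong value even in a very large sample and, in the illustration below, ends up further from the true coefficient than OLS. The data-generating process is an assumption chosen only to make that point visible.

```python
# Minimal sketch (not from the paper) of what happens when the exclusion restriction
# fails: the "instrument" A also affects Y directly, so two-stage least squares is
# inconsistent and here ends up further from the true coefficient than plain OLS.
# The data-generating process and all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n, b_true = 100_000, 2.0          # large n: the problem is invalidity, not sample size

u = rng.normal(size=n)
A = rng.normal(size=n)
X = 0.8 * A + 0.3 * u + rng.normal(size=n)       # strong first stage, mild endogeneity
Y = 1.0 + b_true * X + 0.5 * A + u               # exclusion violated: A enters Y directly

def slope(y, x):
    Z = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(Z, y, rcond=None)[0][1]

ols_b = slope(Y, X)
X_hat = np.column_stack([np.ones(n), A]) @ np.linalg.lstsq(
    np.column_stack([np.ones(n), A]), X, rcond=None)[0]
iv_b = slope(Y, X_hat)

print(f"true b = {b_true},  OLS = {ols_b:.3f},  2SLS with invalid instrument = {iv_b:.3f}")
```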

Finally, Rossi (2014, p. 670) tells us that “there is no evidence from the firm side (for example, from pricing experiments) that endogeneity biases are large in panel or time series data.” He also says that: “If our goal is to help the firm make better decisions … [o]ne may actually prefer estimators that do not attempt to adjust for endogeneity (such as OLS) for this purpose. OLS can have a much lower [error] than an [instrumental variable] method” (Rossi 2014, p. 664).

In summary, instrumental variable method estimates are very likely to be worse than regression estimates in marketing applications. Finite samples doom instrumental variable method estimates to likely bias and inaccuracy. The lack of valid instruments clinches the inferiority of instrumental variable method estimates. Given this, we should end what has become an almost automatic process of reviewers claiming to see endogeneity and authors responding with inappropriate instruments. Those opposed to this proposal should be encouraged to write a paper in which they (1) document significant endogeneity bias, (2) identify a strong and valid instrument, (3) provide economic justification for the instrument’s validity, and (4) demonstrate the ability of that instrument to remove endogeneity bias.

Field experiments

The wish to clarify causation has contributed to the field’s interest in causal models and endogeneity methods. As documented above, both structural models and endogeneity methods are hobbled by underlying assumptions which are likely to be violated in practice. Perhaps in response, there has been a rise in the number of field experiments published in marketing journals, and there are indications that some see field experiments as the new panacea. Before these indications grow into a demand that every strategy study be run as a field experiment, note that field experiments can only be used to evaluate specific, tactical marketing options. Chatterji, Findley, Jensen, Meier and Nielson (2014, pp. 7-8) explain that field experiments can’t be used to address strategic questions because strategy scholars “explain firm performance by variation in industry structure and firm capabilities” and one simply can’t “manipulate attributes of an entire industry … randomly [assign] firms to different competitive positions in the market … [or] randomly [change] the culture of half of the business units in a firm” while holding culture constant in the other half of the business units.

In short, field experiments cannot be used to address strategic issues. If the field moves to require an experiment’s causal clarity for publication, then we will be trapped at the level of tactics, unable to contribute to the understanding of larger, strategic issues that drive firm value.

Conclusions

In conclusion, for marketing strategy research, regression models are more rigorous than structural models, endogeneity methods, or field experiments. Regression’s assumptions are less likely to be violated than the assumptions of structural models or endogeneity methods, and field experiments cannot address firm-level, strategic questions.

To ensure rigor in research, every doctoral program should have a Methods Seminar covering all of the current methods, making clear the assumptions required for each, and discussing the situations in which a particular method’s assumptions are likely to be a better representation of practice than the assumptions of a regression model. Marketing strategy doctoral students and faculty wishing to do rigorous research should then select the method whose assumptions are most appropriate for the research question to be addressed. If the most rigorous method for a particular research question turns out to be a regression, then researchers should also be rigorous in the discussion of findings. While such a model may provide evidence that is consistent with causation, it cannot show causation, and authors should not claim that it does.

In closing, it is rigor, not unearned imperial command, that should guide the researcher’s choice of research method and the review team’s evaluation of that method choice. Consistent with that, to raise the level of rigor of marketing strategy research, we should at least:

  1. No longer reject reduced-form models because they are not structural.

  2. Require structural models to report their goodness of fit and prediction relative to that of the relevant reduced-form model.

  3. Require a reviewer to provide convincing proof that endogeneity bias actually exists before allowing that reviewer to accuse a model of endogeneity.

  4. Call for a marketing application paper that documents significant endogeneity bias, identifies a strong and valid instrument, provides economic justification for the instrument’s validity, and demonstrates the ability of that instrument to remove endogeneity bias.

  5. No longer reject regression models that address a strategic question because the data did not come from a field experiment.