Management research is often obsessed with novelty or originality. This is understandable and in line with most other academic disciplines: novel findings are more interesting and can have a huge impact on management theory and practice. However, novelty as a sine qua non criterion brings serious threats to both the academic community and management practice. The academic community is increasingly concerned that many of these novel findings might be nonreplicable artifacts. Consequently, the literature contains a plethora of results that might not constitute any knowledge, threatening the credibility and practical usefulness of management research. Replication studies, which we define as studies that put published empirical results to an additional empirical test, are needed for the discipline to develop in a meaningful way and to close the theory-practice gap.

To be fair, the claim that most published management research is false or nonreplicable is probably an exaggeration. Without replication, however, the social sciences (including management research) are vulnerable to overestimating effect sizes (Camerer et al. 2018). While it is uncommon in management research to conduct explicit replication studies, many original findings are replicated implicitly: such findings may appear only as baseline models with control variables before the authors proceed to their own empirical contribution, such as a moderation effect built on the (replicated) direct effect. Nevertheless, more explicit replications would aid efforts to synthesize and aggregate available knowledge and would help to put the academic discussion on a solid foundation of empirical evidence.

To improve this base of empirical evidence, Management Review Quarterly (MRQ) publishes structured literature reviews (Fisch and Block 2018), meta-analyses, and, since 2018, replications. These instruments of evidence-based management, which correspond to distinct submission types, build on each other. In particular, replication studies serve an important function in the academic discourse, as they are an indispensable ingredient for developing convincing, robust, and reliable structured literature reviews and quantitative meta-analyses.

The reluctance of management researchers towards replication stems from worries that editors and reviewers might not consider replications valuable contributions. For MRQ, this is not the case; on the contrary, replications are an integral part of the journal’s efforts to achieve its goal of strengthening the empirical base of management research. Nevertheless, although the concept of replication is increasingly discussed in management research, we perceive that it is not yet sufficiently understood. Concrete information and tips on how to appropriately conduct and present a replication study in management research are rare (for notable exceptions, see Bettis et al. 2016; Easley et al. 2000; Madden and Dunn 2000; Kerr et al. 2016; Tsang and Kwan 1999). The goal of this editorial is to help close this gap and to provide MRQ’s authors, readers, and reviewers with some guidance in this regard. Knowing that a short editorial cannot remove all uncertainties and misunderstandings, we consider the following seven principles particularly important when setting out to conceptualize a new replication study.

1. Understand that replication is not reproduction. The idea of replication originates from the natural sciences. Once a research team presents new results that could inform our knowledge about the physical world, labs around the globe will start to repeat, for instance, laboratory experiments in exactly the same way as the original study conducted them. This is reproduction, a goal that is largely impossible to achieve in a social science such as management or business research. Here, models can usually explain only a fraction of the variance in the dependent variable and hence can only illustrate quasi-laws on how human beings in organizations, or organizations themselves, behave. At best, replication in management research can thus produce results that are essentially, but not exactly, similar to the results of the original study.

2. Aim to replicate published studies that are relevant. Like any other research, replication studies have to convincingly answer the question of why someone should care about the results (the ‘so what’ question). There are several ways to achieve this. Researchers can infer relevance from follow-up work inspired by the seminal study they plan to replicate or, more generally, from a growing body of literature on a particular management phenomenon. Such studies are most likely (but not necessarily) published in top-tier management journals. A topic that draws rising attention from policy makers or management practice would be equally relevant. A touchstone of the suitability of a certain finding for a potential replication study could be a simple thought experiment: Is there a high probability that someone will conduct (or has already conducted) a structured literature review or a quantitative meta-analysis on the topic that the replication study could inform?

3. Try to replicate in a way that potentially enhances the generalizability of the original study. While MRQ is open to replications that utilize the data of the original study and analyze it with different methods and specifications, we particularly encourage replication studies that enhance the generalizability of the original study. This goal can be achieved by collecting new data in additional institutional settings and contextual environments or by retesting models with an extended sample size (including the sample of the original study). In doing so, management research will not only have more empirical evidence available that justifies the original claims but will also benefit from the potential to enhance its theoretical scope. Thus, MRQ also considers ‘failed’ replications with nonsignificant results or results contrary to the original study to be within the scope of the journal (see principle 5 below).

4. Do not compromise on the quality of data and measures. For a convincing replication study, it is of the utmost importance to present data of comparable or even higher quality than the data analyzed in the original study. Moreover, the measures employed in the replication study should, in a first step, be comparable to, if not the same as, those of the original study: using the same or at least similar measures allows for a meaningful comparison between the original and the follow-up study and helps to avoid misinterpretations caused by measurement error. In a second step, authors of replication studies can change the measures in a meaningful way to test the sensitivity of the results to the type of measures employed.
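The measurement-error point above can be made concrete with a small, purely illustrative simulation (the sample sizes and noise levels are hypothetical choices, not drawn from any study): when a predictor is measured with classical random error, the estimated regression slope shrinks toward zero, so studies using measures of different quality can report different effect sizes even when the true effect is identical.

```python
import random

random.seed(42)

def attenuated_slope(n=10000, true_beta=1.0, noise_sd=1.0):
    """Estimate an OLS slope when the predictor is measured with error.

    Classical measurement-error theory predicts the slope is attenuated
    by the reliability factor var(x) / (var(x) + var(noise)).
    """
    x = [random.gauss(0, 1) for _ in range(n)]            # true predictor, var = 1
    x_obs = [xi + random.gauss(0, noise_sd) for xi in x]  # noisy measure
    y = [true_beta * xi + random.gauss(0, 1) for xi in x]

    # Simple bivariate OLS slope: cov(x_obs, y) / var(x_obs)
    mean_x = sum(x_obs) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x_obs, y)) / n
    var = sum((a - mean_x) ** 2 for a in x_obs) / n
    return cov / var

# With noise_sd = 1, reliability = 1 / (1 + 1) = 0.5, so the estimated
# slope should come out at roughly half the true effect of 1.0.
print(round(attenuated_slope(), 2))
```

A replication using a noisier (or cleaner) measure than the original study would thus shift the estimated effect even if nothing substantive changed, which is why comparable measures come first and deliberate measure variation second.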

5. Nonsignificant findings are publishable but need explanation. Finding support for the claims of an original study is likely to be less astonishing than discovering no support, particularly if the original study is as relevant as described in the second principle of this editorial. Hence, nonsignificant results or ‘failed’ replications can be extremely important for further theory development. However, they require more information and explanation than ‘successful’ replications. Replication studies should account for this and include detailed comparison tables of the original and replicated results as well as an elaborate discussion of the differences and similarities between the studies. Authors need to make an effort to explain deviant findings. The differences might be due to the different contextual environments from which the samples are drawn; the use of different, more appropriate measures; different statistical methods; or simply frequentist null-hypothesis testing, where, by definition, false positives are possible (Kerr 1998). In any case, authors should comment on these possibilities and take a clear stand.

6. Extensions are possible but not necessary. Many published replication studies are presented as ‘a replication and extension’. One could assume that authors resort to this presentation strategy to avoid being criticized as unoriginal and as not offering a valuable contribution to the literature. MRQ does not deem such an approach necessary; it is equally interested in replications with or without extensions. Should authors decide to replicate and extend, it will, however, be necessary to limit the extension part to avoid creating something that could be confused with a ‘standard’ original study, which would not fall within the journal’s scope. A reasonable extension might be the inclusion of one, and only one, additional concept in a model that otherwise mimics the original study as closely as possible and, in doing so, enhances the explanatory power of the original model. Alternatively, authors might consider applying a new analytical approach after replicating the original design, e.g., by replicating a regression analysis first and extending the analysis with an additional configurational analysis (Woodside 2016).

7. Choose an appropriate format based on the replication approach. Every replication will require a short discussion of the original study and the literature that followed it. In doing so, authors of replications will create the necessary motivation and context. Depending on whether the results are presented as a straightforward replication or as a more elaborate replication and extension, MRQ provides corresponding submission formats. Replication-only studies might be presented as shorter notes (of approximately 3000–5000 words), whereas replications including extensions are probably best presented as full papers (of approximately 5000–8000 words). The author guidelines on the journal’s website provide more specific information in this regard.

Overall, and in line with its editorial mission, MRQ sees great potential in publishing more replication studies and will welcome more submissions in this regard. The present issue collects two exemplary replication studies that illustrate the variety of approaches that become possible when embracing the idea of replication and that can serve as inspiration for future authors.

Hopp et al.’s (2018) replication study is based on Honig and Karlsson’s (2004) paper on the antecedents and consequences of formal business planning. Unlike most replications, which aim to replicate empirical findings in a different setting (see principle 3 above), it is, first, a narrow replication in the sense of Bettis et al. (2016), as it corrects errors of the initial study that are due to the data collection mode of the Panel Study of Entrepreneurial Dynamics (PSED), and, second, a quasi-replication, as it assesses the robustness of the original and corrected results by altering the measures employed to analyze the data. The results suggest that, contrary to the original study, formal business planning might influence firm survival, but the anteceding factors of planning need to be considered with great care. The paper thus clearly enhances our knowledge about the value of planning in entrepreneurship and thereby speaks to an important management debate.

Ozkan (2018) aims to replicate the paper by Aghion et al. (2004) on the relationship between R&D investments and financial structure. Using panel data from 177 Turkish manufacturing firms listed on Borsa İstanbul, the author did not find evidence for an inverted U-shaped relationship or for any effect of R&D intensity on the firms’ leverage ratios. The study thus ‘failed’ to replicate the results of Aghion et al. (2004), finding nonsignificant effects. Importantly, the author provides a brief discussion of why the replication might have ‘failed’ (see principle 5 above). The study is exemplary of our effort to also publish nonsignificant findings and ‘failed’ replications.

In summary, we advocate for replications to become a normal part of the scientific process in business and management research. We encourage authors to submit their replications to MRQ following the principles and tips listed above. To facilitate replications of original studies and to enhance the reproducibility of empirical results in our field, we stress the importance of open access to all materials of published studies. This includes not only the empirical data itself but also other materials, such as the software, code, and specific algorithms used to produce the results of the original study.