1 Introduction

An important outcome of the BISE editorial board meeting in early 2014 was the decision to restructure the editorial board into seven new departments in order to better address the diversity of BISE research communities. This decision was largely driven by the experience that BISE covers an increasingly broad spectrum of research domains, which makes it ever more challenging to manage this diversity while maintaining a high level of quality.

Almost one year later, we dedicate this editorial to discussing the issues of quality and quality management in a research and publication landscape that has been undergoing significant change. While we focus primarily on the journal, some of the discussion goes beyond it and may be relevant for the entire BISE community. Our intention with this editorial is to initiate a more profound discussion about quality standards within the BISE departments. In this vein, we focus on principles that we maintain across departments to cope with the tradeoff between diversity and quality. While we want to provide a platform for all major BISE research communities, we would also like to emphasize the commonalities between them. It is not our intention to create bureaucratic overhead. Instead, we would like to propose standards that help us deliver tangible research results addressing relevant questions of the corporate world and our society.

The understanding of research quality and the existing practices of quality management (including the communication of research results) have led to highly controversial debates. No doubt, the scientific methods applied and the quality of research can never be judged independently of the respective research question(s). Some research questions call for formal treatment, others require empirical examination, and yet others aim at developing novel artifacts to solve a specific problem. Nevertheless, fundamental scientific principles such as verifiability, reproducibility, and generalizability are paramount in all disciplines.

Since it is almost impossible to cover the topic of research quality comprehensively in an editorial, we would like to highlight some aspects which we have found to be of particular importance in order to promote rigor and relevance in our discipline.

2 On the Assessment of Quality in Research

The quality of research results has been and will continue to be assessed in various settings and on different occasions. For instance, journals and conferences evaluate research quality when selecting papers, and similar assessments take place when research grants are awarded or research projects are proposed. Typical assessment criteria in research are, according to Rosemann et al. (2010):

  • theoretical foundation (if relevant),

  • rigor of research methods applied,

  • importance of results as a basis for future research (as seen by scholars),

  • importance of results for practical utilization,

  • thematic relevance as seen by the publishing outlet (journal, special focus issue, conference or conference track),

  • originality of the research paper,

  • presentation clarity of the research results, and

  • presumed appeal of the paper to the relevant audience.

Individual, criterion-specific evaluations are often aggregated into a final acceptance decision on a research paper or research project.

Empirical studies indicate that individual assessments have very different impacts on the overall evaluation. Rosemann et al. (2010) show that “for ECIS 2007, four of six review criteria significantly influenced the acceptance/rejection decision: ‘Significance/contribution,’ ‘Theoretical strength,’ ‘Presentation,’ and ‘Appeal to audience.’ For BPM 2007, just two of five review criteria, ‘Originality’ and ‘Technical soundness,’ were significantly associated with the acceptance/rejection decision. Finally, for ER 2007, we found that all review criteria with the exception of ‘Presentation’ significantly influenced the acceptance/rejection decision.” (Rosemann et al. 2010, p. 11). When specific criteria are related to acceptance decisions, the diversity challenge with respect to quality assessment appears even stronger: “two of seven ECIS evaluation criteria are significant predictors of the acceptance/rejection decision, two of six for BPM, and one of six only for ER”. Similar differences in evaluations can also be observed at scientific journals.
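To make this kind of analysis more tangible, the following minimal sketch fits a logistic regression of accept/reject decisions on per-criterion review scores and reports which criteria appear as significant predictors. It is an illustration only: the data are synthetic, the criterion names and effect sizes are assumptions chosen for the example, and the code does not reproduce the actual analysis or data of Rosemann et al. (2010).

    # Illustrative sketch with synthetic data: which review criteria predict acceptance?
    # This is NOT the analysis or data of Rosemann et al. (2010).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    criteria = ["originality", "rigor", "presentation", "appeal"]  # assumed names

    n = 200
    # Synthetic review scores on a 1-5 scale for each criterion and submission.
    scores = rng.integers(1, 6, size=(n, len(criteria))).astype(float)

    # Assumption for illustration: only originality and rigor drive acceptance.
    logits = -6.0 + 1.2 * scores[:, 0] + 0.9 * scores[:, 1]
    accepted = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

    # Logistic regression of the accept/reject decision on all criterion scores.
    X = sm.add_constant(scores)
    result = sm.Logit(accepted, X).fit(disp=False)

    # Small p-values indicate criteria significantly associated with acceptance.
    for name, p in zip(["intercept"] + criteria, result.pvalues):
        print(f"{name:12s} p = {p:.3f}")

In such a setup the significance pattern of course reflects the assumed coefficients; the point is merely to show how criterion-level ratings can be related statistically to the final decision.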

The evaluation of research quality always depends on a number of criteria and some degree of inter-subjective judgment by editorial boards or program committees, on the research paradigms followed by a community, on the type of research question, on the expectations of the institutions involved (e.g., universities, but also associations), and, last but not least, on personal preferences. These factors not only interact in complex ways but, in many disciplines, also change over time. One example is the recent paper by the Association to Advance Collegiate Schools of Business (AACSB), which argues that academic business research should additionally focus on “impact” and not solely on “rigor”, the dominant requirement of past years (AACSB 2012).

These differing requirements are not surprising considering that business research in general and BISE research in particular are fairly young disciplines. The latter, in particular, has developed a much broader range of research topics and paradigms over the past three decades (Bichler 2014). This phenomenon, however, is not limited to our discipline.

3 Decentralization with Common Standards

The considerations outlined above were important reasons for the recent transformation of our monolithic editorial board structure into a department structure. A department structure should better address the specificities of a particular research community and the research problems and methods prevailing in that area. Over the last year, submissions have increased and new authors have begun submitting their work to the journal; we are confident that the new department structure has contributed to this broader reach, at least to some extent. While a profound discussion about department-specific quality standards is ongoing, we have developed the following standards to maintain quality across departments:

  • In all departments we aim for research results which are original, verifiable, reproducible, and generalizable.

  • We ask for meaningful reviews from experts in the relevant subject area.

  • We try to detect and mitigate conflicts of interest.

  • We take into account the innovation potential and other positive side effects of a research project.

Every submission and review process, of course, raises new questions about how these general principles can be interpreted and instantiated. But it is exactly this constructive debate that we would like to initiate.

Beyond the relatively well-defined and enforceable “governance” of a journal, the heterogeneity of the discipline also creates challenges for the BISE research community:

  1. Reviews by different people with different perceptions, even within a single research community, often yield different results. The more these assessments diverge, the more difficult it is for authors to meet the requirements of all experts. As a result, high-potential submissions may be rejected if the decision depends on the average overall rating or if negative reviews are over-emphasized. It is the responsibility of the associate and department editors to ensure a balanced and thoughtful evaluation process; we consider this essential for our journal, and continuously maintaining such a process is one of the central duties of editors at a journal with high quality standards. Beyond the scope of the journal, the diversity of research paradigms and, thus, of quality standards may inhibit our community’s potential in journal submissions and grant applications that require adherence to particular doctrines. We are competing with other disciplines for visibility and research funding, and diversity might lead others to view our discipline as “small” and “fragmented”, or even “insignificant” or “unsustainable”.

  2. Diversity fosters the formation of “schools of thought” which tend to rest on incommensurable assumptions and to be critical of each other. Examples are the methodological disputes surrounding the emergence of business administration in the early 20th century (science vs. clinical practice) or the early internationalization of BISE (behavioral vs. design paradigm). A negative consequence may be that some schools prevail in certain contexts, making productive discussion almost impossible and aggravating the fragmentation of our community.

4 Common Quality Standards Require Open and Constructive Discussions

It is therefore necessary to debate common quality standards and to develop an attitude of pluralism towards differing quality perceptions in different research communities. Examples of contributions to such a discussion are the assessment dimensions for scientific methods (abstraction, originality, and substantiation) formulated by Frank (2006), principles of scientific substantiation (transparency, and varying concepts of truth such as correspondence, coherence, and consensus), and criteria of scientific conduct such as verifiability, derivation accuracy, consistency, and clarity (see Heinrich et al. 2010, pp. 50–54). These quality criteria reach beyond schools of thought and even beyond disciplines, thus enabling more constructive and coherent quality management across the diverse research paradigms and research communities that are a key feature of BISE.

Given the challenges of quality management in a diverse scientific community, as well as the impact on the core activities of each individual researcher, converging on a shared understanding of quality in business and information systems engineering appears to be a substantial challenge.

We are convinced that moving in this direction is feasible and highly beneficial. Changes must be broadly institutionalized, yet in a largely self-organizing community of researchers, regulation can accomplish this only to a limited extent. For example, in the context of architecture management, Weiss et al. (2013) pointed out that social legitimacy, efficiency, organizational establishment, and trust have a significant effect on how actors respond to restrictions of their design freedom when they are not forced to follow architectural specifications. With respect to a common understanding of research quality in an academic discipline, this means that, in addition to a long-term perspective, various supporting measures need to be adapted to the character of a research community. Examples of such measures in the areas of social legitimacy, efficiency, organizational establishment, and trust are (a) reflecting compliant behavior in the social status of researchers (e.g., awards for reviewers), (b) formulating and establishing desired practices in the main artifacts of the discipline (e.g., charters), and (c) explicitly building trust that decision makers (e.g., editors, conference organizers, spokespersons of community bodies) actually apply the desired practices.

In the BISE journal, for example, we foster these “virtues” by giving “good” reviews (i.e., reviews that apply the agreed-upon understanding of quality) more weight in the acceptance decision on articles, by rewarding “good” reviewers (i.e., those writing constructive and substantial reports) with the “Outstanding Reviewer Award”, and by supporting meritocratic promotion principles in the editorial board. However, this also requires a common understanding and shared vision, in the editorial board as well as in the entire discipline, of rotating responsibilities and scheduled terminations of positions to make room for promotions.

In the future, we would like to improve the interaction between the BISE departments. On the one hand, this should be achieved through the joint discussion of quality standards (requirements for the level of formalization, requirements for empirical work, etc.). On the other hand, diverse content will be discussed in joint special issues or at conferences involving multiple departments. All BISE sub-communities are important for a comprehensive understanding of information systems in today’s economy and society, irrespective of the heterogeneity of topics and approaches in the individual BISE domains. Important innovations are to be expected especially at the boundaries between sub-communities and at the boundaries with other disciplines.

Guidelines for the quality of academic work are especially important in a diverse discipline. Let us continue this discussion constructively in order to further develop our discipline, and let us provide diligent guidance to our younger scholars so as to create and maintain promising future options for the discipline, the corporate world, and our society.