Dear readers of Electronic Markets,

Most academic journals refer in some way to the Journal Impact Factor (IF) in their mission statement or self-description. In fact, the IF has emerged as the key “currency” in academia, and journals as well as researchers seem to focus more on the IF figure than on the insights contained in the respective journals or articles. In many discussions, the mere fact that a manuscript has been accepted by an academic journal with a high IF appears to be an end in itself. Many researchers report only the number of their publications in top journals and are so impressed by these counts that the findings of the research are no longer discussed. A similar phenomenon may be observed when researchers report on the acquisition of projects: the mere size of a grant strongly distracts from taking a closer look at the topics and results of the project.

Obviously, currencies are standardized symbols for exchanging value among many actors. Just as money quantifies the value of the goods that are the subject of a transaction, measures for academic research should quantify the contribution of a journal or a specific article to scientific progress. By applying the same methodology to all scientific disciplines, the IF is meant to provide orientation in a highly diversified academic world. While the absolute number of academic journals is difficult to determine, comprehensive research by Larsen and von Ins (2010) concluded that “the number of serious scientific journals (…) most likely is about 24,000”. According to Ware and Mabe there were “28,100 active scholarly peer-reviewed journals in mid 2012” in the scientific, technical and medical market alone (Ware and Mabe 2012). Taking only the journals listed in the Social Science Citation Index (SSCI) still yields some 3,217 journals for the various social science disciplines. Obviously, this exceeds the ability of even knowledgeable scientists to assess the value of individual journals. In view of this plethora of journals, researchers from other fields and interested readers from outside the academic world value guidance on which journals to read, university librarians appreciate guidance on which journals to purchase, and academics value orientation in qualification processes.

Navigating the jungle

A first development that may be observed is the use of a well-known instrument from marketing: brands. Brands are recognized as important in providing orientation to consumers, especially in markets with an abundant variety of offerings. They are complex constructs that embody assumptions about the product and thus make the decision process more efficient. This applies in particular to decision-makers with little domain knowledge. Important brands in the academic world are the journals and the publishing companies themselves. Research by Larivière et al. (2015) reports that “the top five most prolific publishers account for more than 50 % of all papers published in 2013. Disciplines of the social sciences have the highest level of concentration (70 % of papers from the top five publishers)”. The results also point to growth, since these top publishers (Reed-Elsevier, Wiley-Blackwell, Springer, Taylor & Francis) have clearly increased their market share, especially “since the advent of the digital era (mid-1990s)”. Of course, the brand itself provides only rough and often qualitative guidance.

The second development is more formalized: the emergence of quantitative metrics. Most commonly, the influence or impact of a journal is measured based on the number of citations its articles receive. The well-known Journal Impact Factor (IF) dates back to 1955, when Eugene Garfield, the founder of the Institute for Scientific Information (ISI), proposed a scientific citation index. Five years later, the Impact Factor was developed and applied for the first time to identify journals for the Science Citation Index (Garfield 1999). In 1992, ISI was acquired by Thomson Scientific & Healthcare and is now part of Thomson Reuters, a for-profit organization. Today, the IF is the most significant citation-based measuring instrument and therefore the “principal measure by which research journals are evaluated” (Elliott 2014, 4).

Thomson Reuters calculates the IF based on three databases, which all belong to the Web of Science: the Science Citation Index for science, technology and medicine (STM), the SSCI, and the Arts and Humanities Citation Index (AHCI). The so-called Journal Citation Reports (JCR) annually report the number of citations for a journal and comprised 11,813 journals from over 2,550 publishers, approximately 232 disciplines and 83 countries in 2014 (Thomson Reuters 2014, 4). For the same year, the JCR Science Edition listed 8,659 journals and the JCR Social Science Edition some 3,154 journals. The IF for a journal is determined as follows (see Fig. 1): The denominator comprises the total number of its citable articles published in the two years prior to the IF year, i.e. for the 2015 IF the years 2013 and 2014 are relevant. The numerator contains the number of citations these articles received during the IF year from other Web of Science journals (Garfield 1999, 979). Usually, the IF is then published in June following the index year, i.e. June 2016 for the example in Fig. 1.

Fig. 1 Example of IF calculation for 2015
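Expressed as a formula (a reconstruction from the description above, using the 2015 IF as example):

\[
\mathit{IF}_{2015} = \frac{C_{2015}(2013, 2014)}{N(2013, 2014)}
\]

where \(C_{2015}(2013, 2014)\) denotes the citations received in 2015 by the items a journal published in 2013 and 2014, and \(N(2013, 2014)\) the number of citable items it published in these two years.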

In addition to the standard two-year IF, the JCR comprises additional measurements. Among them are a five-year IF based on a longer reference period, the cited half-life, which determines the median age of the articles cited in a JCR year, the immediacy index, which expresses the average number of citations per article in the reference year itself, the standard IF without self-citations, as well as the Eigenfactor, which assesses the influence of the journal (number of citations of articles from the last five years in a JCR year, without self-citations) (Thomson Reuters 2014).
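The five-year IF and the immediacy index follow the same citation-per-item logic as the standard IF, only with different time windows (again a sketch in the notation introduced above, not Thomson Reuters' official formulation):

\[
\mathit{IF}^{5}_{2015} = \frac{C_{2015}(2010, \dots, 2014)}{N(2010, \dots, 2014)}, \qquad \mathit{II}_{2015} = \frac{C_{2015}(2015)}{N(2015)}
\]

i.e. the five-year IF extends the reference period to the five preceding years, while the immediacy index \(\mathit{II}\) counts only citations to articles published in the JCR year itself.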

Misdirections of the IF

As mentioned, the IF serves to navigate the jungle of journals and is the basis for many decisions. The underlying conception is that a higher IF correlates with a “higher quality” and a “higher reputation” of the journal. It goes without saying that this assumption has several consequences: Subscriptions to journals with a higher IF might be preferred by scientific libraries with limited budgets. At the same time, scholars make an effort to publish their work in journals with a high IF to advance their careers, resulting in journals with a high IF receiving more manuscripts than journals with a lower or no IF. The recruiting of scholars, the evaluation of faculty, or the requirements for public funding of research projects may also depend on articles published in journals having an IF (Dong et al. 2005, 8). The critical role of the IF in these decisions has led scholars to point at various shortcomings of the IF. Among them are:

  • First, by construction the IF neglects research in journals that are not listed in the Web of Science, which is dominated by English-language journals. Thus, the IF represents only a minority of the entire journal market and, at least implicitly, makes the assumption that research outside the Web of Science does not create impact.

  • Second, the IF is calculated on a journal rather than a paper or author basis. On the one hand, this means that even articles published in high-IF journals might receive no citations at all. On the other hand, a small number of highly cited papers may substantially raise the IF of the entire journal. In these cases, the IF fails as a reliable and precise measurement of a journal's impact, since it mainly reflects the small number of highly cited articles. For example, this effect was observed at Nature, where 89 % of the 2004 IF (IF 2004: 32, IF 2014: 41) was generated by 25 % of the articles (Nature 2005).

  • Third, articles are different in nature. Besides research articles, review articles and editorials are included in the IF as well. In addition, longer articles may receive more citations (Dong et al. 2005, 8). For instance, journals could manipulate their IF by publishing primarily review articles, which might attract far more citations than, for example, case studies. They could also decide to publish content that is not peer reviewed, such as editorial papers, invited papers, book reviews, technical notes or letters to the editor; citations to such content are nevertheless counted for the IF.

  • Fourth, citation cartels may influence the IF. As suggested by Georg Franck (1999), groups of editors and journals could collaborate for mutual benefit and thus boost the development of each journal's IF. Little systematic research exists on this phenomenon, but Thomson Reuters is aware of citation stacking: According to the 2013 edition of the JCR, 37 journals were added to the banned list for having “anomalous citation patterns”. This phrase refers to an excessive number of self-citations, but also to “citation stacking”. Usually, journals remain on the banned list for two years.

  • Fifth, several additional effects were reported by Dong et al. (2005). For example, the IF may be biased by differences in citation patterns across scientific disciplines. In dynamic fields, papers are published at shorter intervals and many are cited within the two-year period that is relevant for the IF. In fields with longer citation half-lives, however, many citations arrive only after the two-year period and are not counted. This does not mean that these papers are not cited or without impact; the citations simply occur with a delay that is not reflected in the IF.

Alternatives to the IF

In view of the deficiencies of the IF, several alternative metrics – so-called altmetrics – have been suggested. They partly address the downsides of the IF by defining additional measurements, but may also include composites that are linked to the IF or other indices.

A well-known measure is the h-index (Hirsch index), which was introduced by Jorge E. Hirsch in 2005 and aims to measure the productivity and citation impact of the publications of an individual researcher (Hirsch 2005). It considers a researcher's most cited papers and how often these are cited in other publications. In contrast to the IF, which is specific to a journal and thus aggregates the work of various authors, the h-index is a closer measure of the quality of an individual's output. However, the h-index can also be calculated for a journal. It is not skewed by a single well-cited, influential paper, and it can be calculated from several databases, such as Scopus, the Web of Science and Google Scholar. Google's h-index is determined on a five-year basis, i.e. if a journal has published ten articles within the last five years that were each cited at least ten times, it receives an h5-index of ten. Besides Google Scholar, several tools are available to determine the h-index. Well known is the “Publish or Perish” tool developed by Anne-Wil Harzing, which calculates the h-index as well as further indices, both for authors and for journals (Harzing 2016).
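As a minimal illustration of this definition, the h-index can be computed from a list of per-paper citation counts (a generic sketch, not tied to any particular database or tool):

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have been cited at least h times each."""
    h = 0
    # Rank papers from most to least cited; the h-index is the last
    # rank at which the citation count still reaches the rank.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 12, 9, 5, 4 and 2 times yield an h-index of 4.
print(h_index([12, 9, 5, 4, 2]))  # -> 4
```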

Taking the example of Electronic Markets (EM), Harzing's h25-index yields 68 for EM, meaning that 68 of all articles published by EM in the last 25 years have been cited at least 68 times each. As of March 2016, the most frequently cited article had been cited 2,861 times. Surprisingly, the h-indices for EM vary depending on the source. In contrast to Harzing's score of 68, Google's h5-index for EM was 15, and the h-index listed in the Web of Science was 10. The latter is even lower than Google's h5 although it covers a slightly longer time period. An explanation for the differences is probably the higher diversity of publications that Google takes into account. While Google Scholar includes all publications available online, the Web of Science only covers indexed items, and “books and articles in non-covered journals are not included” (Thomson Reuters 2016). As EM was only added to the SSCI in 2009, all prior papers and citations are not considered. When comparing the h-indices of journals or authors, it is therefore important to ensure that each index is based on the same parameters. However, this is not easy since the calculation is not always transparent.

The example points to other downsides of the h-index. Since it fails to distinguish between the nature of references, a highly referenced paper may also owe its citations to negative references. Furthermore, the h-index ignores the number of authors of a paper as well as their position in the author list. It is also based on data from the past and therefore may not be a valid predictor of an author's future performance. Of course, some of these shortcomings equally apply to the IF. In particular, the Google Scholar metrics are also vulnerable to citation cartels (Davis 2014). An experiment conducted by Delgado López-Cózar et al. (2012) revealed how several false (“fake”) articles significantly increased the authors' and journals' h-indices.

Besides the h-index, a variety of other indices exist that are based on the Scopus database provided by Elsevier. For example, the SCImago Journal Rank (SJR) is operated by a research group in Spain (González-Pereira et al. 2010) and applies a weighted citation score, i.e. citations from a prestigious journal are scored higher than those from a title with a smaller citation network. The algorithm works similarly to PageRank and, thus, is not a direct alternative to the IF, which counts all citations equally. Another metric is the Source Normalized Impact per Paper (SNIP) (Moed 2010). It aims at correcting for differences in publication practices among disciplines and weights citations to a journal based on the number of citations in that field. Finally, a regional journal ranking is conducted by the German Academic Association for Business Research (VHB) via periodic polls. All members of the VHB are asked to rate scholarly journals based on their personal experiences, leading to the so-called Jourqual ranking. It represents the perception of a journal's academic quality in the German-speaking academic community.
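The idea of prestige-weighted citations can be illustrated with a simplified, PageRank-style iteration over a small journal citation matrix (a sketch under strong simplifications; the actual SJR algorithm adds further normalizations, e.g. for journal size and self-citations):

```python
import numpy as np

# Hypothetical citation counts: cites[i, j] = citations from journal i to journal j.
cites = np.array([[0, 4, 1],
                  [2, 0, 3],
                  [1, 1, 0]], dtype=float)

n = cites.shape[0]
damping = 0.85  # damping factor as in PageRank

# Column-stochastic transfer matrix: every journal distributes its "prestige"
# proportionally to its outgoing citations (assumes no journal has zero
# outgoing citations).
transfer = cites.T / cites.sum(axis=1)

prestige = np.full(n, 1.0 / n)  # start with uniform prestige
for _ in range(100):            # power iteration until (near) convergence
    prestige = (1 - damping) / n + damping * transfer @ prestige

# A citation from a journal with high prestige now counts for more
# than one from a peripheral title.
print(prestige)
```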

Assessing the indices

The altmetrics discussion illustrates that academia is still in search of assessment criteria and metrics that adequately suit the various requirements. For example, the journal Tourism Management recently dedicated a special issue to “Journal Impact Factors” in order to encourage an “open debate around a subject that has become prominent across the Sciences and Social Sciences” (Ryan and Page 2015, 1). Without doubt, this is an important discussion, since the altmetrics present some of the same shortcomings as the established IF as well as others of their own. Any new metric has to tackle the challenge of reducing journal quality, an aggregated phenomenon, to a single figure. Existing as well as new metrics require careful scrutiny and should be understood before applying them in decision-making. Taking up the currency analogy, these symbols are standards that may be assessed along three dimensions (Huber et al. 2001, 213).

  • First, it is important to determine the object of standardization. Indices may apply to articles, to authors, to groups of authors (e.g. departments, organizations) as well as to journals. A key question after determining the object itself refers to how it is measured. The discussion above has shown that some indices are calculated from databases that only consider qualified publications, such as articles from journals listed in the Web of Science that are cited in other indexed publications within a certain period of time.

  • Second, the standardization body denotes the institution responsible for calculating and distributing a metric or a set of metrics. The possibilities include private organizations (e.g. Thomson Reuters, Google), federal organizations (e.g. United Nations, National Science Foundation) or non-governmental (user) organizations (e.g. VHB, SCImago). Clearly, the more commercial interests are linked with the institution responsible for the metric, the more attention should be paid to possible opportunistic interests behind providing the metric.

  • Third, the standardization community denotes where a standard is (or should be) used. This scope may be geographical as well as organizational in nature. The geographical scope describes whether a metric is focused on a specific nation or multiple nations (e.g. Jourqual for the German-speaking countries) or whether it is global in nature. The organizational scope defines whether the metric is intended for a single organization (e.g. a university defining its own metric) or for multiple organizations. Although some academic institutions acknowledge the SJR as well as the SNIP, the IF is by far the most widely accepted metric throughout the different scientific communities.

In summary, all available metrics should be understood by their users. As long as alternative metrics have not reached a diffusion similar to that of the IF, this metric will almost automatically appear in decision-making processes. However, all individuals need to be aware that the indices aim to measure a phenomenon that is highly qualitative in nature and that they are applied to support diverse decisions. It should be emphasized that metrics are intended to support, not to replace, decisions, and that the actual work of researchers should remain of interest as well. Some institutions have already composed an individual set of metrics, e.g. a combination of IF, h-index and SJR, depending on their purposes. Clearly, relying on the IF or any other single metric to assess a journal or an author inadequately reduces the problem and fades out information that is important to truly reflect an author's impact.

These reflections on the IF conclude a series of three editorial papers on academic publishing, the first having been on reviewing (Alt et al. 2015) and the second on self-archiving (Alt et al. 2016). However, the discussions in this dynamic field are far from complete, and we look forward to future publications in a special issue on the “Transformation of the academic publishing market”, which is currently open for submissions. It is organized by Stefan Klein, Bozena Mierzejewska and Diego Ponte.

General research issue

The present issue includes a selection of six general research articles. In the first paper, titled “Determining profit-optimizing return policies - A two-step approach on data from Taobao.com”, the authors Wenyan Zhou and Oliver Hinz analyze optimal return policies in e-commerce, considering the potential positive effect on sales as well as the potential negative effect of higher costs, based on data from the largest Chinese online platform (Zhou and Hinz 2016).

The second article “Development of a maturity model for electronic invoice processes” was authored by Angelica Cuylen, Lubov Kosch, and Michael H. Breitner and provides interesting insights into the digitalization of invoice processes (Cuylen et al. 2016).

It is followed by the paper of Netsanet Haile and Jörn Altmann, who present a “Structural analysis of value creation in software service platforms”. System usability, service variety, and personal connectivity are identified as major determinants that contribute to the value offered to users on mobile software service platforms (Haile and Altmann 2016).

In the fourth paper, titled “The ambiguous identifier clustering technique”, the authors Michael Scholz, Markus Franz, and Oliver Hinz propose the Ambiguous Identifier Clustering Technique (AICT), which identifies online transaction data that refer to virtually the same product (Scholz et al. 2016).

The fifth paper “Constructing online switching barriers: examining the effects of switching costs and alternative attractiveness on e-store loyalty in online pure-play retailers” is authored by Ezlika Ghazali, Bang Nguyen, Dilip S. Mutum, and Amrul Asraf Mohd-Any. They investigate the impact of switching barriers on customer retention (i.e. e-store loyalty) and further analyze the moderating effects of switching costs and alternative attractiveness (Ghazali et al. 2016).

Finally, this issue comprises the position paper by Shahriar Akter and Samuel Fosso Wamba on “Big data analytics in e-commerce: A systematic review and agenda for future research”. This paper is available as open access and presents an interpretive framework that explores the definitional aspects, distinctive characteristics, types, business value and challenges of Big Data Analytics in the e-commerce landscape (Akter and Fosso Wamba 2016). It is a valuable contribution towards a foundation of the emerging field of big data.

We hope that you find these articles interesting to read and helpful for your research.

Electronic Markets 2015 awards

One indication of which papers have been found helpful is, again, their citation and download figures. The Editorial Office has determined these figures for all 24 papers published in 2014 in order to nominate them for the Paper of the Year Award. In addition, qualitative criteria were considered as well. In particular, the Senior Editors of EM voted for the two winning papers, taking into consideration the papers' fit concerning EM's scope, rigor, relevance and contribution to theory as well as practice. EM's two Papers of the Year 2015 are:

“Converting freemium customers from free to premium - the role of the perceived premium fit in the case of music as a service” by Thomas M. Wagner, Alexander Benlian and Thomas Hess (Wagner et al. 2014)

“Designing for mobile value co-creation—the case of travel counselling” by Susanne Schmidt-Rauch and Gerhard Schwabe (Schmidt-Rauch and Schwabe 2014)

We are also proud to announce that four reviewers were selected for the Electronic Markets’ 2015 Reviewer of the Year Award:

Volker Bach from Steinbeis University Berlin, Germany

Ulrike Baumöl from Fernuniversität Hagen, Germany

Steven Bellman from Murdoch University, Australia

Martin Smits from Tilburg School of Economics and Management, The Netherlands

We congratulate the authors of the two winning papers and our four Reviewers of the Year! In addition, we would like to take the opportunity to thank all authors, reviewers and editors involved in the papers of 2015! A list of the reviewers that contributed their work in 2015 is attached below. We highly appreciate your effort!

Best regards from Leipzig and St. Gallen,

Rainer Alt, Carsta Militzer-Horstmann, Hans-Dieter Zimmermann

EM reviewers 2015

Babak Abedin

Lailatul Faizah Abu Hassan

Abubakar Mohammed Abubakar

Jasser Al-Kassab

Rainer Alt

Jörn Altmann

Amílcar Arantes

Irit Askira Gelman

Marjan Aslanzadeh

Volker Bach

Tong Bao

Christine Bauer

Ulrike Baumöl

Steven Bellman

Markus Bick

Ivo Blohm

Flavio Bonfatti

Harry Bouwman

Christoph Buck

Shui Lien Chen

Vincent Cho

Ricardo Colomo-Palacios

Anastasia Constantelou

Ioanna Constantiou

Sharon Dawes

Mark De Reuver

Ana Rosa del Águila Obra

Patrick Delfmann

Yi Ding

Nestor Duch-Brown

Alea Fairchild

Sajad Fayezi

Julia Fehrer

Kai Fischbach

Jerry Fjermestad

Samuel Fosso Wamba

Judith Gebauer

Denise Gengatharen

George Giaglis

Peter Gomber

Jörn Grahl

Ully Gretzel

Jingzhi Guo

Desiderio Gutiérrez Taño

Netsanet Haile

Shengnan Han

Robert Harmon

Thomas Hess

Wout Hofman

John Hopkins

Reimer Ivang

Reinhard Jung

Kaveh Kaleghi

Achim Karduck

Stephan Karpischek

Panagiotis Kassianidis

Stefan Klein

Bram Klievink

Benn Konsynski

Chulmo Koo

Hanna Krasnova

Helmut Krcmar

Sherah Kurnia

Rob Law

Ulrike Lechner

Zach Lee

Ho Lee

Christine Legner

Vili Lehdonvirta

Tobias Lehmkuhl

Christiane Lehrer

Uwe Leimstoll

W. Marc Lim

Dennis Linders

Lili Liu

Zhiyong Liu

Euripidis Loukis

André Ludwig

Wolfgang Maass

Maria Madlberger

Christian Maier

Mahbubul Md. Alam

Qingfei Min

Martin Mocker

Kyle Murray

Thomas Myrach

Marco Nierlich

Shahrokh Nikou

Guido Ongena

Sven Overhage

Craig Parker

Mogens Kuehn Pedersen

Alexander Pflaum

Pattarawan Prasarnphanich

Andreja Pucihar

Thomas Puschmann

Mª Soledad Ramírez Montoya

Thierry Rayna

Sven Reinecke

Olaf Reinhold

James Richard

Alexander Rossmann

Mary Luz Sánchez-Gordón

Ansgar Scherp

Theresa Schmiedel

Detlef Schoder

Guido Schryen

Kim Serota

Bingqing Shen

Martin Smits

Stefan Smolnik

Katarina Stanoevska-Slabeva

Stefan Stieglitz

Ali Sunyaev

Reima Suomi

Samar Swaid

Barney Tan

Huay Ling Tay

Frank Teuteberg

Frédéric Thiesse

Luba Torlina

Iis Tussyadiah

Virpi Tuunainen

Nils Urbach

Madalina Vătămănescu

Johan Versendaal

Pavlos Vlachos

Adam Vrechopoulos

Yun Wan

Hannes Werthner

Rolf Wigand

James Wolf

Phil Xiang

Jun Yang

Markus Zanker

Lei Zheng

Hans-Dieter Zimmermann

Robert Zinko