Though citations are critical for communicating science and evaluating scholarly success, properties unrelated to the quality of the work—such as cognitive biases—can influence citation decisions. The primacy effect, in particular, is relevant to lists, which for in-text citations could result in citations earlier in the list receiving more attention than those later in the list. Therefore, how citations are ordered could influence which citations receive the most attention. Using a sample of 150,000 articles, we tested whether alphabetizing in-text citations biases readers toward more often citing articles whose first authors’ surnames begin with letters early in the alphabet. We found that surnames earlier in the alphabet were cited more often than those later in the alphabet when journals ordered citations alphabetically compared with chronologically or numerically. This effect seemed to be stronger in psychology journals (which have a culture of alphabetizing citations) compared with biology or geoscience journals (which primarily order chronologically or numerically) and was strongest among moderately and highly cited articles. Therefore, alphabetizing in-text citations biases citation decisions toward authors with surnames occurring early in the alphabet. These citation decisions result from an interaction between cognitive biases (more attention devoted to items earlier in a list) and the structure of the citation environment (the style in which citations are ordered). We suggest that journals using alphabetically ordered citations switch to chronological ordering to minimize this arbitrary alphabetical citation bias.
In a classic 1975 article, Gerard Salton proposed a model of information retrieval cited by hundreds of other articles since. The problem is, the Salton article does not exist; rather, it is an amalgam of two other citations, neither of which actually proposed the model under question (Dubin, 2004). For decades, researchers have simply copied the citations used by other articles. Unfortunately, such a lackadaisical approach to citation behavior is not uncommon. And, critically, the resulting citation counts are used as key metrics by employers and funding agencies to assess productivity and success of individual scholars and their institutions. Therefore, understanding the psychology of citation decisions has important implications for how scholarly work is evaluated. Here, we investigate how different in-text citation styles interact with cognition to bias citation decisions.
Herbert Simon’s notion of bounded rationality emphasizes that decision makers face limits in time, information, and cognitive capabilities when making decisions (Simon, 1955). When writing scholarly articles and books, for example, researchers make citation decisions (which articles to cite) under time pressure and constraints on attention and memory. However, Simon’s notion of bounded rationality highlighted that, to understand decision making, one must investigate both the cognitive capabilities of the decision maker, as well as the structure of the environment in which those decisions are made (Simon, 1990). This is a useful framework for understanding how the decision environment interacts with researcher cognition to shape citation decisions.
Cognitive biases: The primacy effect
The order in which we process information influences our use of that information. Our attention and memory are subject to the primacy effect, in which we attend to and remember earlier items in a list better than later items (Bigham, 1894; Murdock, 1962). Here, we propose that primacy effects also occur when readers acquire and use information about citations in scholarly articles. For example, limited time and attention could drive researchers to focus more on citations earlier in lists than those later in lists, which can bias them toward using these citations more in their own work. Indeed, articles appearing earlier in a journal’s table of contents or e-mail announcements are viewed and cited more frequently than those appearing later (Berger, 2016; Feenberg, Ganguli, Gaulé, & Gruber, 2017). Further, authors with surnames (last names) starting with letters earlier in the alphabet are cited disproportionately more than authors with surnames later in the alphabet (Huang, 2015). This alphabetical citation bias is an example of the primacy effect: Authors with letters earlier in the alphabet receive more attention than those with letters later in the alphabet. Huang suggested that this bias occurred because researchers scan through alphabetized reference lists (bibliographies at the end of an article) but stop before searching the entire list, which results in citing references early in the list. Here, we propose that the environment under which researchers find and subsequently cite articles (the “citation environment”) influences which articles are cited. That is, the alphabetical citation bias arises from the primacy effect interacting with the in-text citation style dictated by each journal.
The environment: Citation styles
Journals vary in how authors cite previous work within the text (in-text citations). The two broad classes of citation styles for in-text citations (Williams, 2011) are Harvard style—which includes a list of author surnames and publication dates, as used in the American Psychological Association (APA) style—and Vancouver (or numerical) style—which inserts a unique number for each citation. Within the Harvard style, when multiple references are cited, in-text citations can be ordered chronologically with the earliest citation listed first, or alphabetically based on first author surname. Both the alphabetical and chronological Harvard styles use the same alphabetized reference list, the only difference between the two being the ordering of in-text citations. Numerical journals, in contrast, typically organize their reference lists in sequential numerical order, with references appearing in the order in which they were first used in text.
Boundedly rational citation decisions
Huang (2015) postulated that the alphabetical citation bias arises because readers only perform a limited search within reference lists. We propose an alternative—though not mutually exclusive—mechanism that can generate the alphabetical citation bias: Alphabetizing the ordering of in-text citations combines with the primacy effect to result in this bias. That is, researchers’ cognitive processes (more attention devoted to items earlier in a list) interact with the citation environment (citation style) to influence citation decisions.
Huang’s (2015) hypothesis that reference lists drive the alphabetical citation bias would predict that both alphabetical and chronological styles should cause a bias, since these styles share an alphabetized reference list. Alternatively, if the alphabetical citation bias arises due to the order of authors in an in-text citation of multiple sources, then journals using an alphabetical, but not chronological or numerical, style will cause a bias because readers focus on citations early in the list. Thus, we can test whether the alphabetical citation bias arises due to reference list and/or in-text citation ordering. We a priori predicted that letters early in the alphabet would have a greater number of citations for journals with alphabetized in-text citations (alphabetical style), compared with chronological and numerical styles. That is, we hypothesized that the alphabetical citation bias would be strongest when in-text citations are ordered alphabetically.
Because fields of study have different cultures for citation styles, our second question addresses whether fields with a culture of alphabetizing in-text citations show a stronger bias than fields that do not typically alphabetize them. Psychology, for example, is rather homogeneous in its use of alphabetical ordering, dictated by the APA publication manual (American Psychological Association, 2010; see Footnote 1). Citations in biology, in contrast, are heterogeneous with no overarching citation style, though numerical and chronological styles are quite common (Williams, 2011). This difference in citation culture between psychology and biology provides an opportunity to test the alphabetical citation bias across fields. That is, assuming that researchers tend to cite other articles from within their own field, we can test whether an alphabetical citation bias emerges more for fields with a culture of alphabetizing in-text citations. Therefore, we made the a priori prediction that early-letter surnames will have higher citation rates compared with late-letter surnames in psychology but not in fields without a strong culture of alphabetical in-text citations.
To select psychology journals for this analysis, we conducted an InCites™ Journal Citation Reports® (http://about.jcr.incites.thomsonreuters.com/) search using “Psychology” and “Psychology: Multidisciplinary” search terms. We then selected the top seven journals from that list that encompassed general psychology (i.e., not journals with a specific subfield focus). We conducted a similar search using “Biology,” “Ecology,” and “Evolutionary Biology” search terms and selected the top seven general biology/ecology journals. Because Trends in Ecology and Evolution appeared in this list, we added its sister journal Trends in Cognitive Sciences to the psychology journals (it was not in the top seven psychology journals). This yielded 15 general psychology and biology journals. We also wanted to include subfield journals, so we added 12 major journals associated with animal behavior and cognition, since this subfield bridges psychology and biology and its journals include multiple citation styles. This yielded a total of 27 journals (see Table S1 in the Supplementary Materials). We coded each journal as primarily a psychology or a biology journal (see Footnote 2), and we coded the in-text citation type as alphabetical, chronological, or numerical. Six of the journals switched citation types during the study period, so we coded these journals by year to accommodate the switches (see Footnote 3). For each journal, we downloaded from Web of Science™ (https://apps.webofknowledge.com) citation information from all articles published between 2000 and 2015, including first author surnames and the citation count for each article (as of June 2016), resulting in 46,789 articles with citation counts. We kept only articles whose first author surnames begin with an uppercase letter (thus removing surnames starting with de, van, von, etc.).
To measure citation rate, we extracted the number of times that each of the articles was cited in Web of Science™ and calculated the mean number of citations for each letter. We then divided the mean citation rate for each letter by the total number of citations and multiplied by 100 to calculate the mean citation percentage for each letter.
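As a rough sketch of this measure, the per-letter percentage can be computed as follows. The published analysis was run in R on Web of Science exports; the input format here, and the reading of the normalization step as dividing each letter's mean by the sum of the means across letters, are assumptions for illustration:

```python
from collections import defaultdict

def citation_percentages(articles):
    """Mean citation count per first-author initial, scaled to percentages.

    `articles` is a list of (surname, citation_count) pairs -- a hypothetical
    input format. Surnames not starting with an uppercase letter (de, van,
    von, ...) are dropped, mirroring the paper's filtering step.
    """
    by_letter = defaultdict(list)
    for surname, cites in articles:
        if surname and surname[0].isupper():
            by_letter[surname[0]].append(cites)
    # Mean citations per letter, then normalize the means to percentages.
    means = {letter: sum(v) / len(v) for letter, v in by_letter.items()}
    total = sum(means.values())
    return {letter: 100 * m / total for letter, m in sorted(means.items())}
```

For example, two articles by "Adams" with 10 and 20 citations and one by "Baker" with 5 yield letter means of 15 and 5, or 75% and 25% of the total.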
We calculated this measure for two main analyses. In the first, we addressed our prediction that citation style influences citation rate by partitioning articles by citation style (alphabetical, chronological, or numerical) and calculating the citation rate independently for each style; to address our prediction that fields with a culture of using an alphabetical citation style will show a stronger bias, we also calculated citation rates partitioned by field (biology or psychology). In the second, to confirm and further strengthen our findings, we replicated our results using an independent and larger sample comparing alphabetical-style psychology journals with chronological-style geoscience journals.
We conducted an independent replication of this analysis that we preregistered at AsPredicted.org before collecting data (https://aspredicted.org/um2sk.pdf; see Supplementary Materials). In April 2017, we downloaded citation information from Web of Science™ for all 50,945 articles in APA and Psychonomic Society journals (N = 54) that were not included in the first analysis and included more than 100 articles between 2000 and 2015 (see Table S2 in the Supplementary Materials). We also downloaded 49,304 articles from the primary geoscience societies’ journals (N = 31) that included more than 100 articles during the same time period (see Table S2). We conducted the same data analysis used on the first data set to replicate our initial results.
We calculated Kendall’s τ for our correlation coefficient because surname letter is an ordinal variable. In addition, we calculated Bayes factors (BFs) to provide the weight of evidence for the alternative hypothesis relative to the null hypothesis (Wagenmakers, 2007). Bayes factors between 3 and 10 provide moderate evidence for the alternative hypothesis, those between 10 and 30 provide strong evidence, those between 30 and 100 provide very strong evidence, and those above 100 provide extreme evidence (Wagenmakers et al., 2018). Reciprocal values (1/3, 1/10, 1/30, 1/100) provide comparable evidence for the null hypothesis. Bayes factors and credible intervals (in brackets) for Kendall’s τ were calculated based on van Doorn, Ly, Marsman, and Wagenmakers (2018). Bayes factors for linear regressions were computed using weakly informative priors (Rouder & Morey, 2012). Bayes factors for generalized linear mixed models were converted from the Bayesian information criterion (BIC) using BF = e^((BIC_null − BIC_alternative) / 2) (Wagenmakers, 2007).
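The BIC-to-Bayes-factor conversion and the verbal evidence categories can be expressed compactly. This is a Python sketch for illustration; the published analyses were run in R:

```python
import math

def bf_from_bic(bic_null, bic_alternative):
    """BIC approximation to the Bayes factor for the alternative over
    the null (Wagenmakers, 2007): BF = exp((BIC_null - BIC_alt) / 2).
    A lower BIC for the alternative model yields BF > 1."""
    return math.exp((bic_null - bic_alternative) / 2)

def evidence_label(bf):
    """Verbal categories for evidence favoring the alternative
    hypothesis (Wagenmakers et al., 2018)."""
    if bf > 100:
        return "extreme"
    if bf > 30:
        return "very strong"
    if bf > 10:
        return "strong"
    if bf > 3:
        return "moderate"
    return "anecdotal or favoring the null"
```

For instance, a null-model BIC of 210 against an alternative-model BIC of 200 gives BF = e^5 ≈ 148, extreme evidence for the alternative.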
We analyzed the data using R statistical software Version 3.5.1 (R Core Team, 2018). Data, R code, and additional analysis details are available in the Supplementary Materials and on the Open Science Framework (https://osf.io/q5xk9/).
When partitioned according to citation style, in journals with alphabetical in-text citations, the citation rate moderately decreased across letters (τ = −0.34 [−0.56, −0.06], BF = 4.4), whereas journals with chronological (τ = −0.18 [−0.41, 0.09], BF = 0.54) or numerical (τ = −0.20 [−0.43, 0.07], BF = 0.67) in-text citations showed no evidence of a relationship (see Fig. 1a). However, comparing Bayesian linear regression models with and without the citation style by letter interaction did not show evidence for a difference between the two models (BF = 0.34). Therefore, we have weak to moderate evidence that alphabetical in-text citations resulted in a bias toward citing authors with surnames earlier in the alphabet more than those with surnames later in the alphabet.
Categorizing the articles by field showed that the citation rate in psychology very strongly decreased with the letter of the first author’s surname (τ = −0.49 [−0.68, −0.19], BF = 88.1; see Fig. 1b), whereas biology showed no correlation (τ = −0.10 [−0.34, 0.16], BF = 0.32). However, comparing models with and without the field by letter interaction showed only weak support for difference between the two models (BF = 2.6). Thus, in psychology, articles written by first authors with surnames earlier in the alphabet were more likely to be cited than those with surnames later in the alphabet. This effect was not observed in biology, though there was only weak evidence for a difference between fields (see Figs. S1 and S2 in the Supplementary Materials for individual journal data).
The comparison of psychology and biology journals resulted in weak to moderate effects of the alphabetical citation bias. The data, however, were noisy, so we replicated our analysis with an independent sample of psychology journals, a new chronologically ordered field for comparison (geoscience), and an increased sample size of articles for each field.
Citation rate in psychology strongly decreased with the letter of the author’s surname (τ = −0.40 [−0.60, −0.11], BF = 12.0), whereas geoscience showed evidence of no relationship (τ = 0.05 [−0.20, 0.30], BF = 0.27) (see Fig. 2; see Figs. S3 and S4 in the Supplementary Materials for individual journal data). There was stronger evidence for the model that included the field by letter interaction compared with the model without the interaction (BF = 11.1). Thus, an independent replication with a larger sample size and different comparison field corroborated our original findings: An alphabetical citation bias exists in psychology but not in fields that primarily use a chronological citation style.
We propose that the mechanism generating the alphabetical citation bias involves a primacy effect that takes place when psychologists observe a list of in-text citations, thereby biasing citation decisions toward earlier citations. However, this mechanism requires that articles be cited at least once to appear in an in-text citation. Therefore, we predict that the alphabetical citation bias should not influence the probability of whether an article is cited at all. In an exploratory analysis, we calculated the mean probability of being cited and found no evidence for or against an alphabetical citation bias in psychology (Data Set 1: τ = 0.20 [−0.07, 0.43], BF = 0.67; Data Set 2: τ = −0.24 [−0.47, 0.03], BF = 1.1; see Fig. S5 in the Supplementary Materials). Thus, we do not have evidence that the alphabetical citation bias influences whether articles are cited or not. Only after they are cited does the alphabetical citation bias occur.
The previous analysis of the alphabetical citation bias is problematic because (1) we aggregated the data for each letter and (2) citation count data are highly skewed. That is, most articles get cited only a few times, if at all, whereas only a few articles get many citations. Moreover, the analysis ignores potential differences across journals and years. Therefore, we combined our two data sets and conducted an exploratory linear mixed model (LMM) with a log-transformed citation count as the dependent variable, field as a fixed effect, and journal and year as random effects (see the Supplementary Materials for more details on statistical analyses). Because our proposed mechanism of alphabetical citation bias requires at least one citation, we removed from the analysis articles with zero citations. We log-transformed the citation count data because this accounts for the skewed nature of this type of data (Thelwall & Wilson, 2014).
There was extreme evidence for no interaction between letter and field (BF < 0.01). One possible reason for the difference between the aggregated analysis and the article-level analysis is that the mean values in the aggregated analysis are more sensitive to higher citation counts. That is, the alphabetical citation bias is likely to be driven by articles that receive many citations because the more citations an article receives, the more likely it is to be cited again: the “rich get richer” phenomenon (Price, 1976; Barabási, Song, & Wang, 2012). As such, the multitude of articles with very few citations may wash out the bias present in the more highly cited articles. To test this possibility, we conducted a series of analyses, dropping the lower percentiles of the data to assess whether the more highly cited psychology articles were driving the bias. As the citation count percentile increased, the Bayes factors for the presence of an interaction between letter and field also increased (see Fig. S6 in the Supplementary Materials). Specifically, we found moderate to extreme evidence for letter by field interactions above the 50th percentile (i.e., articles that had 10 or more citations), with psychology exhibiting stronger biases than did biology and geoscience: Psychology articles that have moderate to high numbers of citations show the alphabetical citation bias. Therefore, the article-level LMM shows an alphabetical citation bias in the field of psychology in articles with more than 10 citations.
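The percentile-dropping step can be sketched as follows. This Python illustration uses a simple nearest-rank cutoff, which is an assumption; the published R analysis may compute percentile cutoffs differently:

```python
def drop_below_percentile(counts, pct):
    """Return the citation counts at or above the `pct`th percentile,
    mirroring the exploratory analysis that repeatedly drops the lower
    tail of the citation-count distribution before refitting the model.
    Uses a nearest-rank percentile for simplicity."""
    ranked = sorted(counts)
    idx = min(int(len(ranked) * pct / 100), len(ranked) - 1)
    cutoff = ranked[idx]
    return [c for c in counts if c >= cutoff]
```

Refitting the letter-by-field model on, say, `drop_below_percentile(counts, 50)` then restricts the analysis to articles at or above the median citation count (10 or more citations in this data set).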
Alphabetizing in-text citations biases how authors cite articles, with a preference for citing authors early in the alphabet. Alphabetized in-text citations tend to show a stronger bias than do chronological or numerical citations, and psychology journals seem to show the bias more strongly than biology and geoscience journals. This effect occurs with moderately and highly cited articles, which in our data set reflected articles that had been cited 10 or more times.
If Huang’s (2015) hypothesis that the overall effect is driven by alphabetized reference lists were correct, we would have observed an equivalent bias in alphabetical and chronological journals, since both use alphabetized reference lists. However, chronological journals show no evidence for a bias, suggesting that the citation biases are not due to how readers scan reference lists. We postulate that when authors extract in-text references from other articles, they focus on the first few citations. Thus, when in-text citations are organized alphabetically, the strongest alphabetical citation bias is observed. These findings exemplify Simon’s (1990) notion of bounded rationality because citation decisions result from an interaction between cognitive processes (more attention devoted to items earlier in a list) and the structure of the environment (citation style).
Our follow-up analyses showed an alphabetical citation bias in psychology articles cited 10 or more times. This is consistent with our proposed mechanism of the bias because articles with many citations are more likely to receive proportionally more citations than those with fewer citations (cumulative advantage or preferential attachment; Price, 1976; Barabási et al., 2012; see Footnote 4). Thus, the more cited an article is, the more likely it is to be viewed by others, and the more likely it will appear in a list containing multiple in-text citations. If readers exhibit a primacy effect when reading a list of alphabetized in-text citations, authors with letters earlier in the alphabet will be viewed and cited more, and this effect will be stronger for articles that are highly cited already. Thus, the fact that an article is highly cited in a field that alphabetizes in-text citations makes it more susceptible to the alphabetical citation bias.
These findings have important implications for how authors and readers should use references and interpret citation metrics. How reference information is communicated to readers (i.e., citation styles) differs across journals and is often focused on providing editorial benefits. Editorial perspectives on citation styles, however, do not necessarily account for the psychology of how readers obtain and process information. For instance, although numerical citations save space in journals, they prevent the reader from developing a “mind map” that connects ideas to their creators and builds a temporal progression of idea generation (Clauss, Müller, & Codron, 2013). Chronologically ordered author-date in-text citations likely best facilitate a literature mind map by not only making author names immediately obvious but also by making transparent the chronological order in which ideas progressed. Alphabetically ordered citations also make author identity apparent, but obscure the temporal ordering of scientific progress. Though we cannot entirely circumvent the primacy effect, chronologically ordered citations may emphasize earlier, foundational citations rather than bias citations based on the arbitrary feature of an author’s surname. Thus, journals with chronological in-text citations may facilitate cognitive processing of scientific progress to generate a better understanding of the literature, while minimizing the alphabetical citation bias found here.
The bias demonstrated in our study is also important for our understanding of scientific impact and the evaluation of researchers in academia. From tenure and promotion to obtaining grant funding and evaluating the success of entire institutions, citation metrics are used to evaluate scientific impact and productivity (Lehmann, Jackson, & Lautrup, 2006; Ellison, 2010). Thus, arbitrary conventions that influence citation counts can have measurable influences on scholarly success. In the field of economics, for example, where coauthor orders are sometimes arranged alphabetically, researchers with surnames starting with letters earlier in the alphabet are more likely to achieve academic success, such as obtaining tenure or becoming fellows of professional societies (Einav & Yariv, 2006; Van Praag & Van Praag, 2008; Weber, 2018).
The findings in this study suggest that one possible mechanism for the alphabetical bias in academic success is biased citation rates for early letter surnames. Thus, academic researchers with early letter surnames may be viewed more favorably by committees or institutions due to having greater citation rates, even though these greater rates may be unrelated to the quality of their work. Because of these biases, we agree with Huang’s (2015) admonition that citation metrics are not “objective and unbiased measures of scholarship” (p. 785). These biases must be accounted for.
To conclude, citations are critical components of scholarship, and science benefits when references are evaluated critically and objectively. Unfortunately, this is not always the case, as citation decisions are affected by arbitrary properties other than the quality of the work itself. These decisions result from an interaction between researcher cognition (primacy effect) and the citation environment (in-text citation style). When citation environments do not account for these cognitive processes, biases can occur in citation decisions. One such bias is the alphabetical citation bias, which arises as a primacy effect based on the ordering of in-text citations. In a citation environment with alphabetically ordered in-text citations, the temporal ordering of scholarly progress is obscured and, due to the primacy effect, citation decisions are biased toward authors with surnames occurring early in the alphabet. Chronologically ordered citations maintain key information for the reader and do not result in an arbitrary alphabetical citation bias. Our findings regarding the psychology of citation decisions have direct implications for how journal publishers and editors should design citation environments. We suggest that journals using alphabetical ordering for in-text citations switch to chronological ordering to avoid alphabetical biases, thereby facilitating a more objective and clearer understanding of the literature.
We thank the EEB Behavior Group and the CABIN group at the University of Nebraska-Lincoln for their thoughtful feedback on an early draft/presentation, and Tyler Cully and Sama Mehta for assistance in collecting data. We thank E.-J. Wagenmakers, Henrik Singmann, Johnny van Doorn, and an anonymous reviewer for help with improving the analyses and manuscript.
Footnote 1: In psychology, even journals not published by the APA tend to follow the APA style. Exceptions to alphabetical order in psychology journals typically occur because the journal is a part of a larger series of journals across a range of fields (e.g., Trends journals, such as Trends in Cognitive Sciences, which uses the numerical style).
Footnote 2: We excluded Behavioural Processes from the analysis because it covers psychology and biology roughly equally.
Footnote 3: Behavioral Ecology switched in the middle of 2006, so we omitted this year from our analysis.
Footnote 4: Due to our findings, the editors allowed us to maintain chronological ordering of our citations in this article.
American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.
Barabási, A.-L., Song, C., & Wang, D. (2012). Handful of papers dominates citation. Nature, 491(7422), 40.
Berger, J. (2016). Does presentation order impact choice after delay? Topics in Cognitive Science, 8(3), 670–684.
Bigham, J. (1894). Studies from the Harvard Psychological Laboratory (II): B. Memory. Psychological Review, 1(5), 453–461.
Clauss, M., Müller, D. W. H., & Codron, D. (2013). Source references and the scientist’s mind-map: Harvard vs. Vancouver style. Journal of Scholarly Publishing, 44(3), 274–282.
Dubin, D. (2004). The most influential paper Gerard Salton never wrote. Library Trends, 52(4), 748–764.
Einav, L., & Yariv, L. (2006). What’s in a surname? The effects of surname initials on academic success. Journal of Economic Perspectives, 20(1), 175–187.
Ellison, G. (2010). How does the market use citation data? The Hirsch Index in economics (Working Paper No. 16419). Cambridge, MA: National Bureau of Economic Research.
Feenberg, D., Ganguli, I., Gaulé, P., & Gruber, J. (2017). It’s good to be first: Order bias in reading and citing NBER working papers. The Review of Economics and Statistics, 99(1), 32–39.
Huang, W. (2015). Do ABCs get more citations than XYZs? Economic Inquiry, 53(1), 773–789.
Lehmann, S., Jackson, A. D., & Lautrup, B. E. (2006). Measures for measures. Nature, 444(7122), 1003–1004.
Murdock, B. B., Jr. (1962). The serial position effect of free recall. Journal of Experimental Psychology, 64(5), 482–488.
Price, D. D. S. (1976). A general theory of bibliometric and other cumulative advantage processes. Journal of the American Society for Information Science, 27(5), 292–306.
R Core Team. (2018). R: A language and environment for statistical computing [Computer software]. Vienna: R Foundation for Statistical Computing.
Rouder, J. N., & Morey, R. D. (2012). Default Bayes factors for model selection in regression. Multivariate Behavioral Research, 47(6), 877–903.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99–118.
Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41, 1–20.
Thelwall, M., & Wilson, P. (2014). Regression for citation data: An evaluation of different methods. Journal of Informetrics, 8(4), 963–971.
van Doorn, J., Ly, A., Marsman, M., & Wagenmakers, E.-J. (2018). Bayesian inference for Kendall’s rank correlation coefficient. The American Statistician. https://doi.org/10.1080/00031305.2016.1264998
Van Praag, C. M., & Van Praag, B. M. S. (2008). The benefits of being economics Professor A (rather than Z). Economica, 75(300), 782–796.
Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779–804.
Wagenmakers, E.-J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., … Morey, R. D. (2018). Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review, 25(1), 58–76.
Weber, M. (2018). The effects of listing authors in alphabetical order: A review of the empirical evidence. Research Evaluation, 27(3), 238–245. https://doi.org/10.1093/reseval/rvy008
Williams, R. B. (2011). Citation systems in the biosciences: A history, classification and descriptive terminology. Journal of Documentation, 67(6), 995–1014.
Stevens, J.R., Duque, J.F. Order matters: Alphabetizing in-text citations biases citation rates. Psychon Bull Rev 26, 1020–1026 (2019). https://doi.org/10.3758/s13423-018-1532-8
Keywords: Alphabetical order, Bounded rationality, Chronological order, Citation decisions, Citation style, Primacy effect