
Reproducibility issues with correlating Beall-listed publications and research awards at a small Canadian business school

Abstract

The issue of “predatory” publishing began to directly affect academics following the creation of two blacklists by a US librarian, Jeffrey Beall. This paper provides a post-publication replication examination of a study that became a high-profile media case. In a 2017 publication in the Journal of Scholarly Publishing, Derek Pyne incorrectly claimed that research faculty members at a small Canadian business school who published in Beall-blacklisted journals were financially rewarded relative to those who did not. This paper presents new evidence of why the claims made in that paper are erroneous. That research claimed to show that publications in journals on Beall’s list of potential predatory journals were positively correlated with internal research awards, on the basis of simple correlations reported without p-values. We show that the correlations of Beall-listed and ranked journal publications with awards are not statistically significantly different from zero, both by computing p-values and confidence intervals from his published results and by collecting data from the same public sources as Pyne. We also show that the correlation of Beall-listed and unranked publications with awards depends on only two observations and cannot support generalizations.


Notes

  1. Unfortunately, Pyne has declined on several occasions to provide his data in response to formal requests from both authors. To date, he has also not made the research faculty data available in a de-identified format, either to his university or to the Journal of Scholarly Publishing.

  2. The definition of replication by the National Science Foundation's Subcommittee on Robust Research in their 2016 report is: “The ability of a researcher to duplicate the results of a prior study if the same procedures are followed but new data are collected…” (NAS 2016).

  3. Another reason we do not need Pyne’s dataset is that we are not questioning whether his data produced the results reported in his paper; we accept that it did. Our study can be seen as an independent, in-depth analysis of the issues with the small business school and “predatory” publishing.

  4. It is important to note that Pyne’s and our data are not original (i.e., not “the very first”: https://www.vocabulary.com/dictionary/original).

  5. Understanding and identifying causation has occupied thinkers since ancient times (https://plato.stanford.edu/entries/aristotle-causality/). David Hume wrote about causation in “A Treatise of Human Nature,” published in 1739. The analysis of causation was later formalized by Wright (1921). Claiming causation based on a simple linear correlation between two variables, when many other variables are not controlled for, cannot be considered strong science-based evidence.

  6. Makin and de Xivry (2019) suggested that the existence of false positives can create spurious correlations.

  7. Pyne stated: “Publications in Beall-listed journals are statistically insignificant.” Seemingly ignoring this insignificance, he went on to conclude that “in terms of financial rewards, predatory publications are preferable to A*, B, and C publications” (p. 153). The rankings (A*, A, B, C) refer to the Australian Business Deans Council (ABDC) Business Journal Quality List: http://www.abdc.edu.au/pages/abdc-journal-quality-list-2013.html; http://www.abdc.edu.au/pages/2016-review.html.

  8. We computed the 95% CIs as the study only documented the value of the estimated coefficients and their p-values.
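A 95% CI can be recovered from a reported coefficient and its two-sided p-value under a normal (large-sample) approximation: the p-value implies the |z| statistic, which in turn implies the standard error. The sketch below illustrates this back-calculation; the coefficient and p-value in the example are placeholders, not values from Pyne's tables.

```python
from statistics import NormalDist

def ci_from_coef_pvalue(coef, p, level=0.95):
    """Recover an approximate confidence interval from a reported
    coefficient and its two-sided p-value, assuming a normal
    sampling distribution (large-sample approximation)."""
    z_obs = NormalDist().inv_cdf(1 - p / 2)   # |z| implied by the p-value
    se = abs(coef) / z_obs                    # implied standard error
    z_crit = NormalDist().inv_cdf(1 - (1 - level) / 2)
    return coef - z_crit * se, coef + z_crit * se

# Hypothetical example: coefficient 0.50 reported with p = 0.04
lo, hi = ci_from_coef_pvalue(0.50, 0.04)
```

Because p < 0.05 in the example, the recovered 95% CI excludes zero, as expected.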

  9. We thank the anonymous reviewer for suggesting that we report this test of equality of coefficients.

  10. On page 153 Pyne stated: “The positive and statistically significant coefficient for A publications is an exception. It implies that A publications are rewarded enough to compensate for lost income from extra teaching.”

  11. The words in parentheses are to connect the quote with Pyne’s findings.

  12. Also see Ioannidis et al. (2017) for discussion on the bias and power of economic research to detect true size effects.

  13. Pyne’s only comment about this result was (p. 154): “The negative sign for the research award variable may seem unexpected.”

  14. Pyne stated (p. 155): “Initially, attempts were made to use logistic regressions to determine the type of research for which these rewards are given. However, given the small sample size, the results were very sensitive to initial conditions.” Here, he seems to want to explore if the type of research depends on research awards.

  15. Pyne stated, right after the quote in his footnote 8 (p. 155): “For example, the results changed when the 2016 award was added. In addition, the only statistically significant variable (when the last award was added) was the number of predatory publications, which had a positive effect.” Curiously, in this quote, the direction of causation changed. He claimed that the type of research (predatory publications) determined research awards.

  16. Self-nomination is allowed with these internal awards at the small business school.

  17. This multiple testing suggests the researcher’s interest in eventually obtaining some desired significant results.

  18. Given that p-values were reported and extensively discussed by Pyne for the salary regression (his Table 5), the absence of p-values for the simple correlation estimation is surprising.

  19. The population correlation differs from the sampled correlation because of measurement and sampling errors. The latter provides an estimate for the former. Pyne used classical regression analysis and statistical inference to determine the factors that influence salary and in the same spirit he should have reported the p-values of the sample correlations so the reader can make an inference about the population value. However, he did not report p-values for the sample correlations.

  20. p-values can be computed easily using \( t = \left[ {(n - 2)\frac{{r^{2} }}{{1 - r^{2} }}} \right]^{1/2} \) with n − 2 degrees of freedom and standard statistical software, providing this important information to researchers (Shen and Lu 2006).
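As a sketch, the formula above translates directly into code. The sample size n = 12 below is illustrative only (the study's actual n is not reproduced here); the correlation 0.714 is the Beall-listed figure Pyne reported.

```python
from math import sqrt
from scipy import stats

def pvalue_from_r(r, n):
    """Two-sided p-value for H0: rho = 0, using
    t = r * sqrt((n - 2) / (1 - r**2)) with n - 2 degrees of freedom."""
    t = r * sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

# Pyne's reported correlation of 0.714, with an illustrative n = 12
p = pvalue_from_r(0.714, 12)
```

A zero correlation yields p = 1 by construction, which is a quick sanity check on the implementation.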

  21. The difference arises because the transformation of the correlation coefficient yields only an approximate value for its variance.

  22. For the correlation findings, Pyne stated: “The strongest positive correlation with research awards is the number of publications in predatory journals (0.714). This is followed by being a recipient of either a teaching and/or a service award (0.664) and being a member of the committee the year one receives an award (0.461). The rank of associate professor, the number of articles in B-ranked journals, and the average number of publications per year also have relatively strong positive correlations. The number of articles in journals ranked A* and A have smaller but negative correlations. These statistics indicate that the research awards are unlikely to be significant motivators for publishing in high-quality, or even simply non-predatory, journals.” Italics added for emphasis. It is not clear why these results are unlikely to be a motivator for publishing in journals not in Beall’s blacklist. For example, B-ranked journals have a strong positive correlation. Furthermore, no correlation does not mean that the examining committee ignores this information when assessing a candidate.

  23. Even the p-values we computed for Pyne’s results and our own p-values have a very high correlation, r = 0.79 with a p-value of 0.003; without the B-ranked journals the correlation increases to r = 0.86 with a p-value of 0.001. A p-value gives the likelihood of obtaining a test statistic at least as extreme as the one the study obtained from its sample. The sample we drew and our comparable p-values confirm that this is the case, except for the B-ranked journals. The correlation between B-ranked journals and research awards, although important and discussed in our paper, is not the focus of Pyne’s paper or ours, as these are not considered predatory journals.

  24. On page 138 Pyne stated: “In addition to this principal finding, a few other observations are made, including limited evidence of the effects of predatory journal publications on being hired, receiving tenure and promotion, and getting an internal research award.” Italics added for emphasis.

  25. He stated on p. 138: “In addition to this principal finding, a few other observations are made, including limited evidence of the effects of predatory journal publications on being hired, receiving tenure and promotion, and getting an internal research award.” Italics added for emphasis. He also stated in a comment on his Facebook page on May 10, 2017 that he can only speculate on tenure and promotion: “I would loved [sic] to have had access to the data of what people included on their P&T applications. Unfortunately, other then [sic] the limited related data presented in the paper, I can only speculate.” See: https://www.facebook.com/derek.pyne.18.

  26. On December 6, 2018, Pyne’s ban was lifted by his university.

  27. See outcome: https://www.caut.ca/latest/2019/11/report-academic-freedom-breached-thompson-rivers-university.

  28. See: https://www.insidehighered.com/news/2018/11/26/canadian-scholar-says-hes-been-persecuted-his-research-colleagues-who-published. For CAUT and the BC Labour Board complaints see https://www.facebook.com/derek.pyne.18?ref=br_rs.

  29. The BC Labour Board dismissed Pyne’s complaint against his Union and TRU. See the full report under decision number B94/2019 issued on July 16, 2019 (http://www.lrb.bc.ca/decisions/B094$2019.pdf) and the denial of reconsideration B131/2019 issued on September 26, 2019 (http://www.lrb.bc.ca/decisions/B131$2019.pdf); see also http://www.lrb.bc.ca/decisions/NUM2019.HTM.

  30. For example, The Chronicle of Higher Education (https://www.chronicle.com/article/Does-It-Pay-to-Be-Published-in/240202) published a piece entitled “Does It Pay to Be Published in ‘Predatory’ Journals?” which states: “A recent study of one university's business school, however, found….publication in such journals was positively correlated with income.” Italics added for emphasis. Why did he not correct this incorrect conclusion? Pyne did not find a positive correlation with income; he actually found a negative association. He found that the correlations of Beall-listed and unranked publications with income were highly statistically insignificant, and he chose to drop these variables from his preferred model.

  31. See Pyne’s Google Scholar profile (https://scholar.google.ca/citations?hl=en&user=75P5jgGgEvQC), which indicates that currently (December 18, 2019) the number of citations to his paper amounts to 47 (43 after accounting for duplicates), almost one third of his total citations: https://scholar.google.ca/scholar?oi=bib&hl=en&cites=5375323568168537275.


Author information

Corresponding author

Correspondence to Jaime A. Teixeira da Silva.

Ethics declarations

Conflicts of interest

The authors declare no conflicts of interest. The authors have challenged the findings of the Pyne paper on numerous occasions. The first author of this paper (Panagiotis Tsigaris) works at the same institution as Derek Pyne and has an interest in re-examining his colleague’s controversial findings and conclusions, as his and his institution’s reputations are at stake. The authors declare no other relevant conflicts of interest.


About this article


Cite this article

Tsigaris, P., Teixeira da Silva, J.A. Reproducibility issues with correlating Beall-listed publications and research awards at a small Canadian business school. Scientometrics 123, 143–157 (2020). https://doi.org/10.1007/s11192-020-03353-4


Keywords

  • Biases
  • Beall’s blacklists
  • Reproducibility crisis
  • Research spin
  • Rewards
  • Underpowered studies