
Crowdsourcing Peer Review: As We May Do

  • Michael Soprano
  • Stefano Mizzaro
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 988)

Abstract

This paper describes Readersourcing 2.0, an ecosystem that implements the Readersourcing approach proposed by Mizzaro [10]. Readersourcing is an alternative to standard peer review that aims to exploit the otherwise lost opinions of readers. Readersourcing 2.0 implements two different models based on so-called codetermination algorithms. We describe the requirements, present the overall architecture, and show how end users can interact with the system. In the future, Readersourcing 2.0 will also be used to study other topics, such as shepherding users to obtain higher-quality reviews and the differences between review activities carried out with a single-blind versus a double-blind approach.
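To make the idea of codetermination concrete, the following minimal sketch shows one generic way such an algorithm can work: paper quality and reader reputation are computed from each other until they stabilize. The update rules, function names, and the [0, 1] rating scale are illustrative assumptions for this sketch, not the two specific models implemented in Readersourcing 2.0.

    # Illustrative codetermination sketch (an assumption-based example,
    # NOT the exact Readersourcing 2.0 models): paper quality and reader
    # reputation codetermine each other through fixed-point iteration.
    from collections import defaultdict

    def codetermine(ratings, iterations=50):
        """ratings: list of (reader, paper, score) with score in [0, 1].
        Returns (quality, reputation) dicts after iterative refinement."""
        readers = {r for r, _, _ in ratings}
        papers = {p for _, p, _ in ratings}
        reputation = {r: 1.0 for r in readers}  # start all readers equal
        quality = {p: 0.5 for p in papers}

        for _ in range(iterations):
            # 1) Paper quality: reputation-weighted mean of its ratings
            #    (assumes each paper keeps at least one nonzero-reputation rater).
            num, den = defaultdict(float), defaultdict(float)
            for reader, paper, score in ratings:
                num[paper] += reputation[reader] * score
                den[paper] += reputation[reader]
            quality = {p: num[p] / den[p] for p in papers}

            # 2) Reader reputation: closeness of own ratings to current quality.
            err, cnt = defaultdict(float), defaultdict(int)
            for reader, paper, score in ratings:
                err[reader] += abs(score - quality[paper])
                cnt[reader] += 1
            reputation = {r: 1.0 - err[r] / cnt[r] for r in readers}

        return quality, reputation

    # Toy usage: two readers agree with each other, one noisy reader does not.
    scores = [("a", "p1", 0.9), ("b", "p1", 0.8), ("c", "p1", 0.1),
              ("a", "p2", 0.2), ("b", "p2", 0.3), ("c", "p2", 0.9)]
    q, rep = codetermine(scores)
    print(q, rep)

In the toy run, readers "a" and "b", who agree with each other, end up with higher reputation than the dissenting reader "c", so the quality scores are pulled toward their ratings: ratings and reputations determine each other, which is the essence of codetermination.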

Keywords

Scholarly publishing · Peer review · Crowdsourcing

References

  1. CC Attribution 4.0 International Public License (2010). https://creativecommons.org/licenses/by/4.0/legalcode
  2. OpenReview (2016). https://openreview.net/
  3. Akst, J.: I Hate Your Paper: many say the peer review system is broken. Here’s how some journals are trying to fix it. Sci. 24, 36 (2010). http://www.the-scientist.com/2010/8/1/36/1/
  4. Arms, W.Y.: What are the alternatives to peer review? Quality control in scholarly publishing on the web. JEP 8(1) (2002). https://doi.org/10.3998/3336451.0008.103
  5. De Alfaro, L., Faella, M.: TrueReview: a platform for post-publication peer review. CoRR abs/1608.07878 (2016). http://arxiv.org/abs/1608.07878
  6. Dow, S., Kulkarni, A., Klemmer, S., Hartmann, B.: Shepherding the crowd yields better work. In: Proceedings of ACM 2012 CSCW, pp. 1013–1022. ACM (2012)
  7. Medo, M., Rushton Wakeling, J.: The effect of discrete vs. continuous-valued ratings on reputation and ranking systems. EPL 91(4), 48004 (2010). http://stacks.iop.org/0295-5075/91/i=4/a=48004
  8. Meyer, B.: Fixing the process of computer science refereeing, October 2010. https://cacm.acm.org/blogs/blog-cacm/100030-fixing-the-process-of-computer-science-refereeing/fulltext
  9. Mizzaro, S.: Quality control in scholarly publishing: a new proposal. JASIST 54(11), 989–1005 (2003). https://doi.org/10.1002/asi.10296
  10. Mizzaro, S.: Readersourcing - a manifesto. JASIST 63(8), 1666–1672 (2012). https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.22668
  11. Soprano, M., Mizzaro, S.: Readersourcing 2.0: RS_PDF, October 2018. https://doi.org/10.5281/zenodo.1442597
  12. Soprano, M., Mizzaro, S.: Readersourcing 2.0: RS_Rate, October 2018. https://doi.org/10.5281/zenodo.1442599
  13. Soprano, M., Mizzaro, S.: Readersourcing 2.0: RS_Server, October 2018. https://doi.org/10.5281/zenodo.1442630
  14. Soprano, M., Mizzaro, S.: Readersourcing 2.0: technical documentation, October 2018. https://doi.org/10.5281/zenodo.1443371
  15. Sun, Y., et al.: Crowdsourcing information extraction for biomedical systematic reviews. CoRR abs/1609.01017 (2016)
  16. The Economist: Some science journals that claim to peer review papers do not do so (2018). https://www.economist.com/science-and-technology/2018/06/23/some-science-journals-that-claim-to-peer-review-papers-do-not-do-so
  17. Tomkins, A., Zhang, M., Heavlin, W.D.: Reviewer bias in single- versus double-blind peer review. PNAS 114(48), 12708–12713 (2017). http://www.pnas.org/content/114/48/12708

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Mathematics, Computer Science, and Physics, University of Udine, Udine, Italy
