
A Two-Stage Mechanism for Ordinal Peer Assessment

  • Conference paper
  • In: Algorithmic Game Theory (SAGT 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11059)


Abstract

Peer assessment is a major method for evaluating the performance of employees, assessing the contributions of individuals within a group, making social decisions, and many other tasks. The idea is to ask the members of a group to assess the performance of the other members; scores or rankings are then determined from these evaluations. However, peer assessment can be biased and manipulated, especially when there is a conflict of interest. In this paper, we consider the problem of eliciting the underlying ordering (i.e., the ground truth) of n strategic agents with respect to their performances, e.g., quality of work, contributions, or scores. We first prove that no deterministic mechanism obtains the underlying ordering in dominant-strategy implementation. We then propose a Two-Stage Mechanism in which truth-telling is the unique strict Nash equilibrium and yields the underlying ordering. Moreover, we prove that our two-stage mechanism is asymptotically optimal: it needs only \(n+1\) queries, and we prove an \(\varOmega (n)\) lower bound on the query complexity of any mechanism. Finally, we conduct experiments on several scenarios to demonstrate that the proposed two-stage mechanism is robust.
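The paper's actual two-stage mechanism is not reproduced on this page, but the query-counting idea in the abstract can be illustrated with a toy model. The sketch below is an assumption-laden stand-in: each of the n agents is asked once for a ranking of its peers (n queries), one extra query is spent on a consistency check (the real mechanism's second stage is what enforces the truthful equilibrium), and the ordering is recovered by aggregating reported positions. All function names here are hypothetical.

```python
# Illustrative sketch only -- NOT the paper's mechanism. It shows how n+1
# ranking queries can recover the underlying ordering when agents report
# truthfully (as they do in the paper's unique strict Nash equilibrium).

def truthful_report(agent, scores):
    """An agent ranks all peers (excluding itself) by true score, best first."""
    peers = [i for i in scores if i != agent]
    return sorted(peers, key=lambda i: -scores[i])

def elicit_ordering(scores):
    n = len(scores)
    queries = 0
    position_sum = {i: 0 for i in scores}
    # Stage 1: one ranking query per agent (n queries in total).
    for agent in scores:
        report = truthful_report(agent, scores)
        queries += 1
        for pos, peer in enumerate(report):
            position_sum[peer] += pos
    # Stage 2: one extra query, used here only as a consistency check;
    # in the real mechanism this stage is what deters manipulation.
    audit_agent = min(scores)
    _ = truthful_report(audit_agent, scores)
    queries += 1
    # Lower aggregated position means the agent was ranked higher by peers.
    ranking = sorted(scores, key=lambda i: position_sum[i])
    return ranking, queries

scores = {0: 3.1, 1: 9.0, 2: 5.5, 3: 7.2}
ranking, q = elicit_ordering(scores)
print(ranking, q)  # recovers the true order [1, 3, 2, 0] with q == n + 1 == 5
```

This also conveys the intuition behind the \(\varOmega(n)\) lower bound: a mechanism that never queries some agent has no information tying that agent to its position, so every agent must be involved in at least a constant number of queries.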



Acknowledgments

This research is supported in part by the National Basic Research Program of China Grant 2015CB358700, the National Natural Science Foundation of China Grants 61772297, 61632016, and 61761146003, and a grant from Microsoft Research Asia. The authors would like to thank Pingzhong Tang and the anonymous reviewers for their valuable comments.

Author information


Correspondence to Jian Li.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, Z., Zhang, L., Fang, Z., Li, J. (2018). A Two-Stage Mechanism for Ordinal Peer Assessment. In: Deng, X. (ed.) Algorithmic Game Theory. SAGT 2018. Lecture Notes in Computer Science, vol. 11059. Springer, Cham. https://doi.org/10.1007/978-3-319-99660-8_16


  • DOI: https://doi.org/10.1007/978-3-319-99660-8_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99659-2

  • Online ISBN: 978-3-319-99660-8

  • eBook Packages: Computer Science (R0)
