Trust Based Evaluation of Wikipedia’s Contributors
Wikipedia is an encyclopedia whose content anybody can edit. Some users, self-proclaimed "patrollers", regularly check recent changes in order to delete or correct those that damage article integrity. The sheer volume of updates means that some articles remain polluted for a certain time before being corrected. In this work, we show how a multiagent trust model can help patrollers in their task of monitoring Wikipedia. To direct patrollers' verification efforts towards suspicious contributors, our work relies on a formalisation of Castelfranchi and Falcone's social trust theory, representing the patrollers' trust model in a cognitive way.
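The idea of directing patrollers towards suspicious contributors can be illustrated with a minimal sketch. The scoring rule below (a smoothed acceptance rate over past edits) and all names are illustrative assumptions, not the paper's cognitive formalisation:

```python
# Hedged sketch: rank pending edits so patrollers review the
# least-trusted contributors first. The trust estimate here is a
# simple Laplace-smoothed acceptance rate, an assumption for
# illustration only.
from dataclasses import dataclass


@dataclass
class Contributor:
    name: str
    accepted: int = 0   # past edits that survived patrol
    reverted: int = 0   # past edits undone by patrollers

    def trust(self) -> float:
        # Smoothed fraction of past edits that were kept.
        return (self.accepted + 1) / (self.accepted + self.reverted + 2)


def prioritise(edits):
    """Order (edit, contributor) pairs by ascending contributor trust."""
    return sorted(edits, key=lambda pair: pair[1].trust())


alice = Contributor("Alice", accepted=40, reverted=2)
mallory = Contributor("Mallory", accepted=1, reverted=9)
queue = prioritise([("edit #1", alice), ("edit #2", mallory)])
# Mallory's edit is queued first, since her trust score is much lower.
```

A cognitive model in the sense of Castelfranchi and Falcone would replace this single score with explicit beliefs about the contributor's competence and willingness; the sketch only shows how such an assessment could steer a patrol queue.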
Keywords: Trust Model, Multiagent System, Trust Management, Trust Assessment, Assistant Agent
- 1. Castelfranchi, C., Falcone, R.: Social trust: A cognitive approach. In: Castelfranchi, C., Tan, Y.H. (eds.) Trust and Deception in Virtual Societies, pp. 55–90. Kluwer, Dordrecht (2001)
- 4. Zacharia, G., Moukas, A., Maes, P.: Collaborative reputation mechanisms in electronic marketplaces. In: Proceedings of the Hawaii International Conference on System Sciences (HICSS-32), Maui, Hawaii, USA, vol. 08, p. 8026. IEEE Computer Society, Washington (1999)
- 5. Sabater-Mir, J., Paolucci, M., Conte, R.: Repage: Reputation and image among limited autonomous partners. Journal of Artificial Societies and Social Simulation 9(2), 3 (2006)
- 6. Lorini, E., Herzig, A., Hübner, J.F., Vercouter, L.: A logic of trust and reputation. Logic Journal of the IGPL (2009)