A framework to restrict viewing of peer commentary on Web objects based on trust modeling

  • J. Champaign (Email author)
  • R. Cohen
  • N. Sardana
  • J. A. Doucette
Original Article


In this paper, we present a framework aimed at assisting users in coping with the deluge of information within social networks. We focus on the scenario where a user is trying to digest feedback provided on a Web document (or a video) by peers. In this context, it is ideal for the user to be presented with a restricted view of all the commentary, namely those messages that are most beneficial in increasing the user's understanding of the document. Operating within the computer science subfield of artificial intelligence, the centerpiece of our approach is a model of the trustworthiness of the person leaving commentary (the annotator), determined on the basis of ratings provided by peers and adjusted by a model of the similarity of those peers to the current user. We compare three competing formulae for restricting what is shown to users, which vary in the extent to which they integrate trust modeling, in order to emphasize the value of this component. By simulating the knowledge gains achieved by users (inspired by methods used in peer-based intelligent tutoring), we are able to validate the effectiveness of our algorithms. Overall, we offer a framework for making the Social Web a viable source of information through effective modeling of the credibility of peers. When peers are misguided or deceptive, our approach is able to remove their messages from consideration for the user.
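The selection mechanism described in the abstract can be sketched in code: score each annotator by peer ratings weighted by each rater's similarity to the current user, then show only comments whose annotator clears a trust threshold. This is a minimal illustrative sketch, not the paper's actual formulae; the function names, the agreement-based similarity measure, the neutral prior of 0.5 for unknown peers, and the threshold value are all assumptions introduced here.

```python
def rater_similarity(user_ratings, peer_ratings):
    """Fraction of co-rated items on which the user and a peer agree.

    A simple binary-agreement measure (hypothetical choice); returns a
    neutral 0.5 when the two have no items in common.
    """
    shared = set(user_ratings) & set(peer_ratings)
    if not shared:
        return 0.5
    agree = sum(user_ratings[i] == peer_ratings[i] for i in shared)
    return agree / len(shared)


def annotator_trust(peer_votes, similarities):
    """Similarity-weighted average of peer votes on an annotator.

    peer_votes maps peer -> +1 (found the annotator's messages helpful)
    or -1 (unhelpful); similarities maps peer -> similarity to the
    current user.  Result lies in [-1, 1].
    """
    weight = lambda p: similarities.get(p, 0.5)  # neutral prior for unknown peers
    den = sum(weight(p) for p in peer_votes)
    if den == 0:
        return 0.0
    return sum(weight(p) * v for p, v in peer_votes.items()) / den


def visible_comments(comments, trust, threshold=0.0):
    """Restrict the view to comments whose annotator's trust clears a threshold."""
    return [c for c in comments if trust.get(c["annotator"], 0.0) > threshold]
```

For example, a peer who agrees with the current user on most co-rated items contributes more to an annotator's trust score than a dissimilar peer, so an annotator endorsed mainly by dissimilar or deceptive raters is filtered out of the restricted view.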


Keywords: Modeling user trust and credibility · Social Web · Selecting user commentary on Web objects · Reducing information overload



Copyright information

© Springer-Verlag Wien 2014

Authors and Affiliations

  • J. Champaign (1), Email author
  • R. Cohen (1)
  • N. Sardana (1)
  • J. A. Doucette (1)
  1. University of Waterloo, Waterloo, Canada
