Crowdsourcing Platform for Collecting Cognitive Feedbacks from Users: A Case Study on Movie Recommender System

Chapter
Part of the Springer Series in Reliability Engineering book series (RELIABILITY)

Abstract

The aim of this research is to present a crowdsourcing-based recommendation platform called OurMovieSimilarity (OMS), which collects and shares cognitive feedback from users. In particular, we focus on users' cognition patterns regarding the similarity between two movies. OMS also analyzes the collected data to classify users into groups and dynamically adjusts movie recommendations for each user. The goal is to make OMS interact intelligently and to collect data faster and more accurately. We received more than a thousand feedback items from 50 users and analyzed this data to group the users. A user's group can change dynamically according to that user's selections. OMS is still online and collecting data. We aim to enrich the cognitive feedback dataset to more than 20,000 feedback items from 5,000 users, so that the recommendation system can analyze users' cognition in judging movie similarity more accurately.
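The abstract does not specify how OMS groups users from their similarity feedback, so the following is only a minimal sketch of one plausible approach: represent each user by the set of movie pairs they judged similar, measure agreement between users with the Jaccard index, and greedily merge users whose agreement exceeds a threshold. All names (`jaccard`, `group_users`, the threshold value, and the sample feedback) are illustrative assumptions, not the platform's actual implementation.

```python
# Hypothetical sketch: group users by their agreement on movie-pair
# similarity votes. Each user's feedback is modeled as a set of
# (movie_a, movie_b) pairs the user judged to be similar.

def jaccard(a, b):
    """Jaccard similarity between two sets of feedback pairs."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def group_users(feedback, threshold=0.5):
    """Greedily cluster users whose feedback overlap meets the threshold.

    feedback: dict mapping user id -> set of (movie_a, movie_b) pairs.
    Returns a list of groups (lists of user ids). Re-running after new
    feedback arrives lets a user's group change dynamically.
    """
    groups = []
    for user, votes in feedback.items():
        for group in groups:
            # Compare against the group's first member as a representative.
            if jaccard(votes, feedback[group[0]]) >= threshold:
                group.append(user)
                break
        else:
            groups.append([user])
    return groups

feedback = {
    "u1": {("Alien", "Aliens"), ("Up", "Coco")},
    "u2": {("Alien", "Aliens"), ("Up", "Coco")},
    "u3": {("Heat", "Ronin")},
}
print(group_users(feedback))  # u1 and u2 agree fully; u3 forms its own group
```

In practice a platform at this scale would more likely use a standard clustering algorithm over a user-feedback matrix; the greedy pass above is only meant to make the "dynamic grouping" idea concrete.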

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2017R1A2B4010774).


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Engineering, Chung-Ang University, Seoul, Korea