Search Computing, pp. 207–222
Extending Search to Crowds: A Model-Driven Approach
Abstract
In many settings, the human opinion provided by an expert or knowledgeable user can be more useful than factual information retrieved by a search engine. Conventional search systems do not capture the subjective opinions and recommendations of friends, or fresh, online-provided information that requires contextual or domain-specific expertise. Search results obtained from conventional search engines can therefore be complemented by crowdsearch, an online interaction with crowds selected among friends, experts, or people who are presently at a given location; an interplay between conventional and crowd-based queries can then occur, so that the two search methods support each other. In this paper, we use a model-driven approach for specifying and implementing a crowdsearch application; in particular, we define two models: the “Query Task Model”, representing the meta-model of the query that is submitted to the crowd and of the associated answers, and the “User Interaction Model”, showing how the user can interact with the query model to fulfil her needs. Our solution allows for a top-down design approach, from the design of the crowdsearch task down to the design of the crowd answering system. Our approach also enables automatic code generation, thus leading to quick prototyping of crowdsearch applications.
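As a rough illustration of what a Query Task Model might capture, the sketch below represents a crowd query, its task type, and the answers collected from responders. This is a minimal, hypothetical rendering, not the chapter's actual meta-model: all class names, fields, and the example task types are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a crowd query task and its answers.
# Names, fields, and task types are illustrative only; the
# paper's actual Query Task Model is defined in the chapter.

@dataclass
class Answer:
    responder: str   # a friend, expert, or nearby user
    content: str     # free text, a choice, or a ranking payload
    timestamp: float

@dataclass
class QueryTask:
    question: str                # what is asked of the crowd
    task_type: str               # e.g. "like", "order", "add", "comment" (assumed)
    crowd: List[str]             # the selected responders
    answers: List[Answer] = field(default_factory=list)

    def collect(self, answer: Answer) -> None:
        """Record one crowd answer as it arrives."""
        self.answers.append(answer)
```

Under this sketch, a concrete instance would pair one `QueryTask` with many `Answer` objects, which is the kind of one-to-many structure a code generator could map directly onto forms and result pages.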
Keywords
Exploratory Search · Social Networking Platform · Social Platform · Amazon Mechanical Turk · Automatic Code Generation