
The VLDB Journal, Volume 26, Issue 4, pp. 511–535

Argument discovery via crowdsourcing

  • Quoc Viet Hung Nguyen
  • Chi Thang Duong
  • Thanh Tam Nguyen
  • Matthias Weidlich
  • Karl Aberer
  • Hongzhi Yin
  • Xiaofang Zhou
Regular Paper

Abstract

The number of controversial issues being discussed on the Web has been growing dramatically. In articles, blogs, and wikis, people express their points of view in the form of arguments, i.e., claims that are supported by evidence. The discovery of such arguments holds great potential for informing decision-making. However, argument discovery is hindered by the sheer amount of available Web data and by its unstructured, free-text representation. The former calls for automatic text-mining approaches, whereas the latter implies a need for manual processing to extract the structure of arguments. In this paper, we propose a crowdsourcing-based approach to build a corpus of arguments, an argumentation base, thereby mediating the trade-off between automatic text mining and manual processing in argument discovery. We develop an end-to-end process that minimizes the crowd cost while maximizing the quality of crowd answers by: (1) ranking argumentative texts, (2) proactively eliciting user input to extract arguments from these texts, and (3) aggregating heterogeneous crowd answers. Our experiments with real-world datasets show that our method discovers virtually all arguments in the documents while processing only 25% of the text with more than 80% precision, using only 50% of the budget consumed by a baseline algorithm.
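To make the three-stage process concrete, the following Python sketch mocks up the pipeline under strong simplifying assumptions: argumentativeness is scored with a toy cue-word heuristic, crowd workers are plain callables, and answers are combined by simple majority vote in place of the probabilistic aggregation developed in the paper. All names (score_argumentativeness, discover_arguments) are hypothetical illustrations, not the paper's API.

```python
# A minimal sketch of the abstract's three-stage pipeline:
# (1) rank texts, (2) elicit crowd answers, (3) aggregate them.
# All names are hypothetical; this is not the paper's implementation.
from collections import Counter

def score_argumentativeness(sentence: str) -> float:
    """Toy ranking signal: density of discourse cues that often mark arguments."""
    cues = {"because", "therefore", "thus", "evidence", "claim", "however"}
    words = [w.strip(".,;!?") for w in sentence.lower().split()]
    return sum(w in cues for w in words) / max(len(words), 1)

def discover_arguments(sentences, workers, budget, answers_per_task=3):
    """Spend a fixed crowd budget on the most argumentative sentences first."""
    # (1) Rank candidates so the budget goes to likely arguments.
    ranked = sorted(sentences, key=score_argumentativeness, reverse=True)
    arguments = []
    for sentence in ranked:
        if budget < answers_per_task:
            break  # crowd budget exhausted
        # (2) Elicit input: ask several workers to label the same sentence.
        votes = [worker(sentence) for worker in workers[:answers_per_task]]
        budget -= answers_per_task
        # (3) Aggregate heterogeneous answers; majority vote stands in for
        # the probabilistic model used in the paper.
        label, count = Counter(votes).most_common(1)[0]
        if label == "argument" and count > answers_per_task // 2:
            arguments.append(sentence)
    return arguments

# Example with three simulated workers labeling two sentences.
workers = [lambda s: "argument" if "because" in s else "other"] * 3
texts = [
    "Vaccination works because trials show reduced infection rates.",
    "The weather was pleasant yesterday.",
]
print(discover_arguments(texts, workers, budget=6))  # keeps the first sentence
```

Ranking before elicitation is what keeps the crowd cost low in this sketch: the budget is consumed on the most promising sentences, so low-scoring text may never be sent to workers at all.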

Keywords

Crowdsourcing · Graphical models · Web mining


Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  • Quoc Viet Hung Nguyen (1) (corresponding author)
  • Chi Thang Duong (2)
  • Thanh Tam Nguyen (2)
  • Matthias Weidlich (3)
  • Karl Aberer (2)
  • Hongzhi Yin (1)
  • Xiaofang Zhou (1, 4)

  1. The University of Queensland, Brisbane, Australia
  2. École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  3. Humboldt-Universität zu Berlin, Berlin, Germany
  4. Macau University of Science and Technology, Macau, China
