Calculating Error Bars on Inferences from Web Data

  • Kwabena Nuamah
  • Alan Bundy
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 869)

Abstract

In this work, we explore uncertainty in automated question answering over real-valued data from knowledge bases on the Internet. We argue that the coefficient of variation (cov) is an intuitive and general form in which to express this uncertainty, with the added advantage that it can be calculated exactly and efficiently. The large amount of data on the Internet presents a good opportunity to answer queries that go beyond simply looking up facts and returning them. However, such data is often vague and noisy. For discrete results, e.g. stating that a particular city is the capital of a particular country, probabilities are a natural way to assign uncertainty to answers. For continuous variables, or quantities that are typically treated as continuous (such as populations of countries), probabilities are uninformative, being infinitesimal. For instance, the probability that the population of India is exactly equal to the last census count is effectively zero. Our aim is to capture uncertainty in these estimates in an intuitive, uniform, and computationally efficient way. We present initial efforts at automating the inference process over real-valued web data while accounting for some of the typical sources of uncertainty: noisy data and errors introduced by inference operations. Having considered several problem domains and query types, we find it effective to approximate all continuous random variables with Gaussian distributions and to communicate uncertainties to users as coefficients of variation. Our experiments show that the estimates of uncertainty derived by our method are well calibrated and correlate with the actual deviations from the true answer. An immediate benefit of our approach is that our inference framework can attach credible intervals to the real-valued answers that it infers. This conveys to a user the plausible magnitude of the error in the answer, a more meaningful measure of uncertainty than the ranking scores provided by other question answering systems.
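As a rough illustration of the kind of output described above (not code from the paper), the sketch below fits a Gaussian to point estimates gathered from several sources, reports the coefficient of variation, and attaches an approximately 95% credible interval. The function name and the population figures are hypothetical and used only for demonstration.

```python
import statistics

def summarize_estimates(values, z=1.96):
    """Approximate a set of point estimates with a Gaussian and report
    the mean, coefficient of variation, and a ~95% credible interval.

    This is an illustrative sketch, not the paper's implementation.
    """
    mean = statistics.mean(values)
    std = statistics.stdev(values)        # sample standard deviation
    cov = std / abs(mean)                 # coefficient of variation (dimensionless)
    interval = (mean - z * std, mean + z * std)
    return mean, cov, interval

# Hypothetical population figures for India collected from different web sources
sources = [1_366_000_000, 1_380_004_385, 1_352_642_280, 1_339_180_127]
mean, cov, (lo, hi) = summarize_estimates(sources)
print(f"estimate = {mean:,.0f}, cov = {cov:.3f}, "
      f"~95% credible interval = [{lo:,.0f}, {hi:,.0f}]")
```

Reporting the cov alongside the interval keeps the uncertainty measure dimensionless and comparable across queries, while the interval conveys the plausible magnitude of error in the answer's own units.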

Keywords

Query answering · Credible intervals · Uncertainty · Bayesian inference · Coefficient of variation

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Informatics, University of Edinburgh, Edinburgh, UK