Targeted Feedback Collection Applied to Multi-Criteria Source Selection

  • Julio César Cortés Ríos
  • Norman W. Paton
  • Alvaro A. A. Fernandes
  • Edward Abel
  • John A. Keane
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10509)

Abstract

A multi-criteria source selection (MCSS) scenario identifies, from a set of candidate data sources, the subset that best meets a user’s needs. These needs are expressed using several criteria, which are used to evaluate the candidate data sources. An MCSS problem can be solved using multi-dimensional optimisation techniques that trade off the different objectives. Sometimes we may have uncertain knowledge regarding how well the candidate data sources meet the criteria. To overcome this uncertainty, we may rely on end users or crowds to annotate the data items produced by the sources in relation to the selection criteria. In this paper, we introduce an approach called Targeted Feedback Collection (TFC), which aims to identify those data items on which feedback should be collected, thereby providing evidence on how the sources satisfy the required criteria. TFC targets feedback by considering the confidence intervals around the estimated criteria values. The TFC strategy has been evaluated, with promising results, against other approaches to feedback collection, including active learning, using real-world data sets.
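The abstract does not specify how the confidence intervals are computed or used to target feedback. The following Python sketch illustrates one plausible reading under stated assumptions: per-criterion estimates are treated as proportions of annotated items that satisfy the criterion, a standard normal-approximation (Wald) interval is placed around each estimate, and further feedback is targeted at criteria where the intervals of competing sources still overlap. The source names, counts, and interval formula are illustrative assumptions, not the paper's definitions.

```python
import math

def confidence_interval(k, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion.

    Assumption: criterion values are estimated as k satisfying items out of
    n annotated items; the paper's actual estimator may differ.
    """
    if n == 0:
        return (0.0, 1.0)  # no feedback yet: maximal uncertainty
    p = k / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half_width), min(1.0, p + half_width))

def intervals_overlap(a, b):
    """True if two (low, high) intervals overlap, i.e. the comparison
    between the two estimates is still inconclusive."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical candidate sources with (satisfying, annotated) counts per criterion.
sources = {
    "s1": {"precision": (18, 20), "freshness": (5, 10)},
    "s2": {"precision": (14, 20), "freshness": (9, 10)},
}

# Target further feedback on criteria whose intervals overlap across sources,
# since additional annotations there are most likely to change the selection.
for criterion in ("precision", "freshness"):
    ivs = {s: confidence_interval(*counts[criterion]) for s, counts in sources.items()}
    if intervals_overlap(ivs["s1"], ivs["s2"]):
        print(f"collect more feedback on '{criterion}':", ivs)
    else:
        print(f"'{criterion}' already discriminates the sources:", ivs)
```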

Keywords

Data integration · Source selection · Feedback collection · Pay-as-you-go · Multi-objective optimisation

Notes

Acknowledgement

Julio César Cortés Ríos is supported by the Mexican National Council for Science and Technology (CONACyT). Data integration research at Manchester is supported by the UK EPSRC, through the VADA Programme Grant.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Julio César Cortés Ríos (1)
  • Norman W. Paton (1)
  • Alvaro A. A. Fernandes (1)
  • Edward Abel (1)
  • John A. Keane (1)
  1. School of Computer Science, University of Manchester, Manchester, UK
