
Active Learning with c-Certainty

  • Eileen A. Ni
  • Charles X. Ling
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7301)

Abstract

It is well known that noise in labels deteriorates the performance of active learning. To reduce this noise, methods that query multiple oracles have been proposed; however, none of them can guarantee label quality. In addition, most previous works assume that the noise level of oracles is evenly distributed or example-independent, which may not be realistic. In this paper, we propose a novel active learning paradigm in which oracles can return both labels and confidences. Under this paradigm, we then propose a new and effective active learning strategy that can guarantee the quality of labels by querying multiple oracles. Furthermore, we remove the assumptions of the previous works mentioned above, and design a novel algorithm that is able to select the best oracles to query. Our empirical study shows that the new algorithm is robust and performs well with different types of oracles. To the best of our knowledge, this is the first work to propose this new active learning paradigm together with an active learning algorithm in which label quality is guaranteed.
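The core idea of the abstract (keep querying oracles that report both a label and a confidence until the aggregated label's certainty reaches a threshold c) can be sketched as follows. This is an illustrative sketch only: the naive-Bayes log-odds combination rule and the `query_until_certain` interface are assumptions for illustration, not necessarily the paper's exact aggregation method.

```python
import math

def query_until_certain(oracles, x, c=0.9):
    """Query oracles one at a time on example x until the certainty
    that the aggregated label is correct reaches the threshold c.

    Each oracle is a callable returning (label, confidence), where
    label is +1 or -1 and confidence (> 0.5) is the oracle's
    self-reported probability that its label is correct.
    Answers are combined with a naive-Bayes log-odds update, which
    assumes oracle errors are independent (an illustrative choice).
    """
    log_odds = 0.0  # log odds that the true label of x is +1
    certainty = 0.5  # certainty before any oracle has answered
    for oracle in oracles:
        label, conf = oracle(x)
        # Clamp so that a confidence of exactly 1.0 stays finite.
        conf = min(max(conf, 0.5), 1.0 - 1e-9)
        update = math.log(conf / (1.0 - conf))
        log_odds += update if label == +1 else -update
        # Certainty of the currently-leading label.
        certainty = 1.0 / (1.0 + math.exp(-abs(log_odds)))
        if certainty >= c:
            break  # label quality threshold c reached; stop querying
    final_label = +1 if log_odds >= 0 else -1
    return final_label, certainty
```

For example, two independent answers of (+1, 0.8) already push the certainty above 0.9, so with c = 0.9 the loop stops after two queries instead of exhausting all oracles.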

Keywords

Active learning · Multiple oracles · Noisy data



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Eileen A. Ni (1)
  • Charles X. Ling (1)

  1. Department of Computer Science, The University of Western Ontario, London, Canada
