Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13518))

Abstract

Interactive labeling supports manual image labeling by presenting system predictions that users correct. However, existing labeling methods do not effectively account for image difficulty, which affects both system predictions and user labeling. We introduce ConfLabeling, a confidence-based labeling interface that represents image difficulty through user and system confidence. The interface allows users to assign a confidence score to each label (user confidence), and the system visualizes its predictions together with their confidence levels (system confidence). We expect user confidence to improve system predictions, and system confidence to help users quickly and correctly identify the images that need inspection. We conducted a user study comparing the proposed confidence-based interface with a conventional non-confidence interface in interactive image labeling tasks of varying difficulty. The results indicate that the confidence-based interface achieved higher classification accuracy than the non-confidence interface when the images were not overly difficult.
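The paper's implementation is not reproduced here, but the two confidence signals it describes can be sketched with standard tools. In this hypothetical example (all variable names and the random-forest choice are assumptions, not the authors' code), user confidence is a per-label score used as a sample weight during training, and system confidence is the classifier's probability for its predicted class, used to rank images for inspection:

```python
# Hypothetical sketch: combining user and system confidence in labeling.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                       # toy image features
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)  # toy labels

# User confidence: a score in [0, 1] attached to each manually assigned label.
user_conf = rng.uniform(0.5, 1.0, size=200)

# Low-confidence labels contribute less to training via sample_weight.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y, sample_weight=user_conf)

# System confidence: the probability of the predicted class for each image.
proba = clf.predict_proba(X)
system_conf = proba.max(axis=1)

# Surface the least-confident predictions for the user to inspect first.
to_inspect = np.argsort(system_conf)[:10]
```

This mirrors the paper's intent only loosely: user confidence feeds back into the model, while system confidence drives what the interface highlights for review.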



Acknowledgements

This work was supported by JST CREST Grant Number JPMJCR17A1, Japan.

Author information

Corresponding author

Correspondence to Chia-Ming Chang.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Lu, Y., Chang, CM., Igarashi, T. (2022). ConfLabeling: Assisting Image Labeling with User and System Confidence. In: Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S. (eds) HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence. HCII 2022. Lecture Notes in Computer Science, vol 13518. Springer, Cham. https://doi.org/10.1007/978-3-031-21707-4_26

  • DOI: https://doi.org/10.1007/978-3-031-21707-4_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21706-7

  • Online ISBN: 978-3-031-21707-4

  • eBook Packages: Computer Science; Computer Science (R0)
