Continuous Adaptation for Interactive Object Segmentation by Learning from Corrections

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12361)

Abstract

In interactive object segmentation, a user collaborates with a computer vision model to segment an object. Recent works employ convolutional neural networks for this task: given an image and a set of corrections made by the user as input, they output a segmentation mask. These approaches achieve strong performance by training on large datasets, but they keep the model parameters unchanged at test time. Instead, we recognize that user corrections can serve as sparse training examples, and we propose a method that capitalizes on this idea to update the model parameters on-the-fly to the data at hand. Our approach enables adaptation to a particular object and its background, to distribution shifts in a test set, to specific object classes, and even to large domain changes, where the imaging modality changes between training and testing. We perform extensive experiments on 8 diverse datasets and show that, compared to a model with frozen parameters, our method reduces the required corrections (i) by 9%–30% when distribution shifts between training and testing are small; (ii) by 12%–44% when specializing to a specific class; and (iii) by 60% and 77% when we completely change the domain between training and testing.
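The abstract's core mechanism is that user correction clicks double as sparse, pixel-level training examples, so the segmentation network can be fine-tuned at test time on the very image being annotated. The sketch below illustrates that idea in PyTorch. It is a minimal illustration under our own assumptions, not the authors' exact procedure: we assume `model(image)` returns per-pixel foreground logits of shape (1, 1, H, W), corrections arrive as (row, col, label) tuples, and `n_steps` and the optimizer are placeholders chosen for illustration only.

```python
import torch
import torch.nn.functional as F

def adapt_on_corrections(model, optimizer, image, clicks, n_steps=10):
    """Fine-tune a segmentation model at test time on user corrections.

    A sketch under assumed interfaces: `model(image)` yields per-pixel
    foreground logits of shape (1, 1, H, W); `clicks` is a list of
    (row, col, label) tuples, with label 1 for foreground corrections
    and 0 for background ones.
    """
    if not clicks:  # nothing to learn from yet
        return model
    model.train()
    for _ in range(n_steps):
        logits = model(image)
        # Treat each correction click as one labelled pixel and average
        # the per-pixel binary cross-entropy over all clicks.
        loss = torch.zeros((), device=image.device)
        for r, c, label in clicks:
            target = torch.tensor(float(label), device=image.device)
            loss = loss + F.binary_cross_entropy_with_logits(
                logits[0, 0, r, c], target)
        loss = loss / len(clicks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    model.eval()
    return model
```

In an interactive session, such an adaptation step would be interleaved with the user's correction loop: after each new click, the model is updated on all corrections gathered so far, then re-run to produce the next mask. The paper additionally covers adaptation across an entire test sequence and across domains; none of those details are reflected in this sketch.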

Notes

Acknowledgement

We thank Rodrigo Benenson, Jordi Pont-Tuset, Thomas Mensink and Bastian Leibe for their input on this work.

Supplementary material

Supplementary material 1: 504471_1_En_34_MOESM1_ESM.pdf (PDF, 6.4 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Google Research, Zurich, Switzerland
  2. RWTH Aachen University, Aachen, Germany