Active Scene Classification via Dynamically Learning Prototypical Views

  • Zachary A. Daniels
  • Dimitris N. Metaxas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12312)

Abstract

Scene classification is an important computer vision problem with applications to a wide range of domains, including remote sensing, robotics, autonomous driving, defense, and surveillance. However, many scene classification algorithms make simplifying assumptions about the data that leave them ill-suited for real-world use: they generally assume that the input consists of single views that are highly representative of a limited set of known scene categories. In real-world applications, such ideal data is rarely available. In this paper, we propose an approach to active scene classification in which an agent must assign a label to the scene with high confidence while minimizing the number of sensor adjustments, and in which the agent can dynamically update its underlying machine learning models. Specifically, we employ the Dynamic Data-Driven Applications Systems (DDDAS) paradigm: the machine learning model drives the sensor manipulation, and the data captured by the manipulated sensor is used to update the machine learning model in a feedback control loop. Our approach is based on learning to identify prototypical views of scenes in a streaming setting.
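
The feedback loop described above (the model chooses the next sensor adjustment; the newly captured view updates the model) can be made concrete with a small sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a nearest-prototype scene classifier with an LVQ-style online update, and the sensor interface (`capture`, `adjust`) and the `extract_features` function are hypothetical placeholders.

```python
# Minimal sketch of a DDDAS-style active scene classification loop.
# NOT the paper's implementation: the prototype update rule, the sensor
# interface (capture/adjust), and extract_features are assumed placeholders.

import numpy as np


class PrototypeSceneClassifier:
    """Nearest-prototype classifier with a streaming (online) update rule."""

    def __init__(self, feature_dim, num_classes, learning_rate=0.05):
        # One prototype per scene category; multiple prototypes per class are possible.
        self.prototypes = 0.01 * np.random.randn(num_classes, feature_dim)
        self.learning_rate = learning_rate

    def predict(self, feature):
        # Confidence per class from a softmax over negative prototype distances.
        dists = np.linalg.norm(self.prototypes - feature, axis=1)
        scores = np.exp(-dists)
        return scores / scores.sum()

    def update(self, feature, label):
        # LVQ-style online step: pull the labeled prototype toward the new view.
        self.prototypes[label] += self.learning_rate * (feature - self.prototypes[label])


def active_classify(sensor, classifier, extract_features,
                    confidence_threshold=0.9, max_adjustments=10):
    """Adjust the sensor until the scene label is confident or the budget is spent."""
    label = None
    for step in range(max_adjustments):
        view = sensor.capture()            # current view of the scene
        feature = extract_features(view)   # e.g., a CNN embedding of the view
        probs = classifier.predict(feature)
        label = int(np.argmax(probs))
        if probs[label] >= confidence_threshold:
            return label, step             # confident enough: stop early
        # Feedback control: the model's uncertainty drives the next sensor move.
        sensor.adjust(uncertainty=1.0 - probs[label])
    return label, max_adjustments
```

In a full DDDAS loop, confirmed labels (for example, from an oracle or a high-confidence decision) would also be fed back through `classifier.update`, so that the learned prototypical views adapt to the streaming data.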

Keywords

Computer vision · Scene classification · Prototype learning · Active vision · Active learning · Dynamic data-driven applications systems

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Rutgers University, Piscataway, USA
