On the Use of Optical Flow for Scene Change Detection and Description

  • Navid Nourani-Vatani
  • Paulo Vinicius Koerich Borges
  • Jonathan M. Roberts
  • Mandyam V. Srinivasan


We propose the use of optical flow information as a method for detecting and describing changes in the environment from the perspective of a mobile camera. We analyze the characteristics of the optical flow signal and demonstrate how robust flow vectors can be generated and used to detect depth discontinuities and appearance changes at key locations. To achieve this, we present a full discussion of camera positioning, distortion compensation, noise filtering, and parameter estimation. We then extract statistical attributes from the flow signal to describe the location of the scene changes, and employ clustering and the dominant shape of the vectors to increase descriptiveness. Once a database of nodes (where a node is a detected scene change) and their corresponding flow features has been created, matching can be performed whenever nodes are encountered, enabling topological localization. We retrieve the most likely node according to the Mahalanobis and chi-square distances between the current frame and the database. The results illustrate the applicability of the technique for detecting and describing scene changes under diverse lighting conditions, in indoor and outdoor environments, and on different robot platforms.
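The node-retrieval step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes each node stores the mean of its flow descriptor and an inverse covariance matrix, and that a query descriptor is matched to the node with the smallest Mahalanobis distance (a chi-square distance between flow histograms is shown alongside). The names `best_node`, `mean`, and `cov_inv` are hypothetical.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    # Mahalanobis distance between a query descriptor x and a node's
    # stored flow statistics (mean vector, inverse covariance matrix).
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def chi_square(h1, h2, eps=1e-9):
    # Chi-square distance between two normalized flow histograms;
    # eps avoids division by zero in empty bins.
    return float(0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def best_node(query, nodes):
    # Retrieve the most likely node: the one whose flow statistics
    # are closest to the query in Mahalanobis distance.
    return min(nodes, key=lambda n: mahalanobis(query, n["mean"], n["cov_inv"]))
```

For example, with two nodes whose mean flow descriptors are `[1, 0]` and `[0, 1]` (identity inverse covariance), a query of `[0.9, 0.1]` retrieves the first node.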


Scene change detection · Optical flow descriptor · Mapping and localization · Computer vision · Mobile robots





Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  • Navid Nourani-Vatani ¹
  • Paulo Vinicius Koerich Borges ²
  • Jonathan M. Roberts ²
  • Mandyam V. Srinivasan ³

  1. The Australian Centre for Field Robotics, University of Sydney, Darlington, Australia
  2. ICT Centre, CSIRO, Pullenvale, Australia
  3. University of Queensland, Saint Lucia, Australia
