Real-time one-dimensional motion estimation and its application in computer vision

  • Original Paper
  • Machine Vision and Applications

Abstract

Traditional optical flow methods are designed to generate two-dimensional (2D) motion fields. In some applications, however, only the motion field along a predefined line is needed, and the rest of the redundant 2D region (Cong et al., CVPR, 2009) can be ignored. In this paper, we propose a general framework for estimating a one-dimensional (1D) motion field under the \(L_1\) norm metric, which can flexibly incorporate additional constraints depending on the specific application. Compared with 2D methods, whose model computation often yields only a locally optimal solution, our 1D version offers real-time global optimization and sub-pixel accuracy while handling large displacements. Experiments on synthetic data verify the effectiveness of the 1D model. Two vision applications are investigated: (i) for crowd counting, we design a novel unified framework for both line-of-interest and region-of-interest counting that uses our 1D motion model to mosaic the dynamic blobs; this framework adapts to new sites without scene-specific learning and avoids overfitting. (ii) For visual odometry, we apply our 1D model to estimate the linear velocity of a robot in real time and obtain results comparable to the benchmark encoder. Experiments verify that the proposed model produces satisfactory results in real time.
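The 1D formulation admits a globally optimal search over displacement labels along the line, for instance by dynamic programming over a chain of pixels. The sketch below is only an illustration of that idea under assumed choices: an L1 data term, an L1 smoothness penalty with weight lam, an integer label set bounded by max_disp, and the hypothetical helper name estimate_1d_motion. It is not the authors' implementation, and the sub-pixel accuracy mentioned above would additionally require a local refinement (e.g., a parabolic fit of the data cost around the selected label).

    # Illustrative sketch (not the paper's implementation): 1D motion along a
    # scan line with an L1 data term and an L1 smoothness term, solved
    # globally by dynamic programming. line_t and line_t1 are intensity
    # profiles sampled on the same line in two consecutive frames; max_disp
    # and lam are assumed parameters.
    import numpy as np

    def estimate_1d_motion(line_t, line_t1, max_disp=10, lam=0.5):
        n = len(line_t)
        disps = np.arange(-max_disp, max_disp + 1)   # candidate displacements
        k = len(disps)

        # L1 data cost |I_t(x) - I_{t+1}(x + d)| for every pixel/label pair.
        data = np.full((n, k), np.inf)
        for j, d in enumerate(disps):
            for i in range(n):
                if 0 <= i + d < n:
                    data[i, j] = abs(float(line_t[i]) - float(line_t1[i + d]))

        # Pairwise cost lam * |d_i - d_{i-1}| between neighbouring pixels.
        trans = lam * np.abs(disps[None, :] - disps[:, None])

        # Forward pass: dynamic programming is globally optimal on a chain.
        cost = data[0].copy()
        back = np.zeros((n, k), dtype=int)
        for i in range(1, n):
            total = cost[:, None] + trans            # (previous label, label)
            back[i] = np.argmin(total, axis=0)
            cost = total[back[i], np.arange(k)] + data[i]

        # Backtrack the optimal labelling and map labels to displacements.
        labels = np.zeros(n, dtype=int)
        labels[-1] = int(np.argmin(cost))
        for i in range(n - 1, 0, -1):
            labels[i - 1] = back[i, labels[i]]
        return disps[labels]

    # Toy check: a bright band shifted right by 3 pixels should yield d = 3
    # in the printed interior slice.
    f0 = np.zeros(200)
    f0[80:120] = 255.0
    f1 = np.roll(f0, 3)
    print(estimate_1d_motion(f0, f1)[85:115])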


References

  1. Adam, A., Rivlin, E., Shimshoni, I., Reinitz, D.: Robust real-time unusual event detection using multiple fixed-location monitors. IEEE Trans. Pattern Anal. Mach. Intell. 30(3), 555–560 (2008)

  2. Albiol, A., Mora, I., Naranjo, V.: Real time high density people counter using morphological tools. IEEE Trans. Intell. Transp. Syst. 2(4), 4652–4652 (2001)

  3. Alvar, M., Torsello, A., Sanchez-Miralles, A., et al.: Abnormal behavior detection using dominant sets. Mach. Vis. Appl. 25(5), 1351–1368 (2014)

  4. Amini, A., Weymouth, T., Jain, R.: Using dynamic programming for solving variational problems in vision. IEEE Trans. Pattern Anal. Mach. Intell. 12(9), 855–867 (1990)

  5. Anandan, P.: A computational framework and an algorithm for the measurement of visual motion. Int. J. Comput. Vis. 2, 283–310 (1989)

  6. Antonini, G., Thiran, J.P.: Counting pedestrians in video sequences using trajectory clustering. IEEE Trans. Circuits Syst. Video Technol. 16(8), 1008–1020 (2006)

  7. Baker, S., Matthews, I.: Lucas-Kanade 20 years on: a unifying framework. Int. J. Comput. Vis. 56(3), 221–255 (2004)

  8. Barfoot, T.D.: Online visual motion estimation using FastSLAM with SIFT features. In: International Conference on Intelligent Robots and Systems, pp. 579–585 (2005)

  9. Barron, J., Fleet, D., Beauchemin, S.: Performance of optical flow techniques. Int. J. Comput. Vis. 12(1), 43–77 (1994)

  10. Bülthoff, H., Little, J., Poggio, T.: A parallel algorithm for real-time computation of optical flow. Nature 337(6207), 549–553 (1989)

  11. Brostow, G.J., Cipolla, R.: Unsupervised Bayesian detection of independent motion in crowds. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 594–601 (2006)

  12. Brox, T., Bruhn, A., Papenberg, N., et al.: High accuracy optical flow estimation based on a theory for warping. In: Computer Vision–ECCV, pp. 25–36. Springer, Berlin, Heidelberg (2004)

  13. Brox, T., Bregler, C., Malik, J.: Large displacement optical flow. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 41–48 (2009)

  14. Burt, P., Yen, C., Xu, X.: Multi-resolution flow-through motion analysis. In: IEEE Computer Vision and Pattern Recognition, pp. 246 (1983)

  15. Caballero, F., Merino, L., Ferruz, J., Ollero, A.: Vision-based odometry and SLAM for medium and high altitude flying UAVs. J. Intell. Robotic Syst. 54(1), 137–161 (2009)

  16. Campbell, J., Sukthankar, R., Nourbakhsh, I., et al.: A robust visual odometry and precipice detection system using consumer-grade monocular vision. In: International Conference on Robotics and Automation, pp. 3421–3427 (2005)

  17. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)

  18. Chan, A.B., Vasconcelos, N.: Counting people with low-level features and Bayesian regression. IEEE Trans. Image Process. 21(4), 2160–2177 (2012)

  19. Chan, A.B., Liang, Z.S.J., Vasconcelos, N.: Privacy preserving crowd monitoring: Counting people without people models or tracking. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–7 (2008)

  20. Chen, W., Mied, R.P.: Optical flow estimation for motion-compensated compression. Image Vis. Comput. 31(3), 275–289 (2013)

  21. Chen, T., Chen, T., Chen, Z.: An intelligent people-flow counting method for passing through a gate. In: IEEE Conference on Robotics, Automation and Mechatronics (2006)

  22. Chiuso, A., Favaro, P., Jin, H., et al.: Structure from motion causally integrated over time. IEEE Trans. Pattern Anal. Mach. Intell. 24(4), 523–535 (2004)

  23. Cho, S., Chow, T., Leung, C.: A neural-based crowd estimation by hybrid global learning algorithm. IEEE Trans. Syst. Man Cybern. Part B 29(4), 535–541 (1999)

  24. Cong, Y., Gong, H., Zhu, S.C., et al.: Flow mosaicking: Real-time pedestrian counting without scene-specific learning. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1093–1100 (2009)

  25. Conte, D., Foggia, P., Percannella, G., Vento, M.: Counting moving persons in crowded scenes. Mach. Vis. Appl. 24(5), 1029–1042 (2013)

  26. Dalal, N., Triggs, B., Schmid, C.: Human detection using oriented histograms of flow and appearance. In: Computer Vision – ECCV, pp. 428–441. Springer, Berlin, Heidelberg (2006)

  27. Davies, A., Yin, J., Velastin, S., et al.: Crowd monitoring using image processing. Electron. Commun. Eng. J. 7(1), 37–47 (1995)

  28. Díaz, J., Ros, E., Pelayo, F., Ortigosa, E., Mota, S.: FPGA-based real-time optical-flow system. IEEE Trans. Circuits Syst. Video Technol. 16(2), 274–279 (2006)

  29. Dong, L., Parameswaran, V., Ramesh, V., et al.: Fast crowd segmentation using shape indexing. In: IEEE 11th International Conference on Computer Vision, pp. 1–8 (2007)

  30. Efros, A., Berg, A., Mori, G., Malik, J.: Recognizing action at a distance. In: IEEE International Conference on Computer Vision, vol. 2, pp. 726–733 (2003)

  31. Felzenszwalb, P.F., Zabih, R.: Dynamic programming and graph algorithms in computer vision. IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 721–740 (2011)

  32. Fleet, D., Jepson, A.: Computation of component image velocity from local phase information. Int. J. Comput. Vis. 5, 77–104 (1990)

  33. García-Martín, Á., Martínez, J.M.: On collaborative people detection and tracking in complex scenarios. Image Vis. Comput. 30(4), 345–354 (2012)

  34. Gong, H., Pan, C., Yang, Q., Lu, H., Ma, S.: Generalized optical flow in the scale space. Comput. Vis. Image Underst. 105(1), 86–92 (2007)

  35. Heeger, D.: Optical flow using spatiotemporal filters. Int. J. Comput. Vis. 1(4), 279–302 (1988)

  36. Heikkilä, M., Pietikäinen, M.: A texture-based method for modeling the background and detecting moving objects. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 657–662 (2006)

  37. Horn, B., Schunck, B.: Determining optical flow. Artif. Intell. 17(1–3), 185–203 (1981)

  38. Irani, M., Anandan, P.: A unified approach to moving object detection in 2D and 3D scenes. IEEE Trans. Pattern Anal. Mach. Intell. 20(6), 577–589 (1998)

  39. Kong, D., Gray, D., Tao, H.: A viewpoint invariant approach for crowd counting. In: 18th International Conference on Pattern Recognition, pp. 1187–1190 (2006)

  40. Kumar, A., Tannenbaum, A., Balas, G.: Optical flow: a curve evolution approach. IEEE Trans. Image Process. 5(4), 598–610 (1996)

  41. Leibe, B., Schindler, K., Van Gool, L.: Coupled detection and trajectory estimation for multi-object tracking. In: IEEE 11th International Conference on Computer Vision, pp. 1–8 (2007)

  42. Lempitsky, V., Roth, S., Rother, C.: Fusionflow: Discrete-continuous optimization for optical flow estimation. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)

  43. Lin, S.F., Chen, J.Y., Chao, H.X.: Estimation of number of people in crowded scenes using perspective transformation. IEEE Trans. Syst. Man Cybern. Part A Syst. Humans 31(6), 645–654 (2001)

  44. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: IJCAI, pp. 674–679 (1981)

  45. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: IJCAI, pp. 674–679 (1981)

  46. Lv, F., Zhao, T., Nevatia, R.: Self-calibration of a camera from video of a walking human. In: 16th International Conference on Pattern Recognition, pp. 562–567 (2002)

  47. Ma, Z., Chan, A.B.: Crossing the line: Crowd counting by integer programming with local features. In: Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 2539–2546. IEEE (2013)

  48. Matthies, L., Maimone, M., Johnson, A., Cheng, Y., Willson, R., Villalpando, C., Goldberg, S., Huertas, A., Stein, A., Angelova, A.: Computer vision on Mars. Int. J. Comput. Vis. 75(1), 67–92 (2007)

  49. Metaxas, D., Zhang, S.: A review of motion analysis methods for human nonverbal communication computing. Image Vis. Comput. 31, 421–433 (2013)

  50. Nagel, H.: On the estimation of optical flow: relations between different approaches and some new results. Artif. Intell. 33, 299–324 (1987)

  51. Nistér, D., Naroditsky, O., Bergen, J.: Visual odometry. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 1–652 (2004)

  52. Paragios, N., Ramesh, V.: A MRF-based approach for real-time subway monitoring. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition 1, 1–1034 (2001)

  53. Pauwels, K., Van Hulle, M.M.: Realtime phase-based optical flow on the GPU. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–8 (2008)

  54. Rabaud, V., Belongie, S.: Counting crowded moving objects. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 705–711 (2006)

  55. Roberts, R., Nguyen, H., Krishnamurthi, N., et al.: Memory-based learning for visual odometry. In: IEEE International Conference on Robotics and Automation, pp. 47–52 (2008)

  56. Roth, S., Black, M.: On the spatial statistics of optical flow. Int. J. Comput. Vis. 74(1), 33–50 (2007)

  57. Scaramuzza, D., Siegwart, R.: Appearance-guided monocular omnidirectional visual odometry for outdoor ground vehicles. IEEE Trans. Robotics Spec. Issue Vis. SLAM 24(5), 1015–1026 (2008)

  58. Shi, J., Malik, J.: Motion segmentation and tracking using normalized cuts. In: Sixth International Conference on Computer Vision, pp. 1154–1160 (1997)

  59. Singh, A.: An estimation-theoretic framework for image-flow computation. In: Third International Conference on Computer Vision, pp. 168–177 (1990)

  60. Stauffer, C., Grimson, W.E.L.: Adaptive background mixture models for real-time tracking. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, Fort Collins (1999)

  61. Sun, D., Roth, S., Lewis, J.P., et al.: Learning optical flow. In: Computer Vision – ECCV, pp. 83–97. Springer, Berlin, Heidelberg (2008)

  62. Sun, D., Roth, S., Black, M.J.: Secrets of optical flow estimation and their principles. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2432–2439 (2010)

  63. Sun, D., Roth, S., Black, M.J.: A quantitative analysis of current practices in optical flow estimation and the principles behind them. Int. J. Comput. Vis. 106(2), 115–137 (2014)

  64. Viola, P., Jones, M., Snow, D.: Detecting pedestrians using patterns of motion and appearance. Int. J. Comput. Vis. 63(2), 734–741 (2005)

  65. Volz, S., Bruhn, A., Valgaerts, L., et al.: Modeling temporal coherence for optical flow. In: IEEE International Conference on Computer Vision, pp. 1116–1123 (2011)

  66. Weinland, D., Ronfard, R., Boyer, E.: A survey of vision-based methods for action representation, segmentation and recognition. Comput. Vis. Image Underst. 115(2), 224–241 (2011)

  67. Weiss, Y., Adelson, E.H.: Slow and smooth: A Bayesian theory for the combination of local motion signals in human vision. In: Center for Biological and Computational Learning Paper, vol. 158 (1998)

  68. Wu, B., Nevatia, R.: Detection of multiple, partially occluded humans in a single image by Bayesian combination of edgelet part detectors. In: IEEE International Conference on Computer Vision, pp. 90–97 (2005)

  69. Wu, S., He, X., Lu, H., Yuille, A.L.: A unified model of short-range and long-range motion perception. In: Advances in Neural Information Processing Systems, pp. 2478–2486 (2010)

  70. Xu, L., Jia, J., Matsushita, Y.: Motion detail preserving optical flow estimation. IEEE Trans. Pattern Anal. Mach. Intell. 34(9), 1744–1757 (2012)

  71. Yao, B., Yang, X., Zhu, S.C.: Introduction to a large-scale general purpose ground truth database: methodology, annotation tool and benchmarks. In: 6th Int’l Conf on EMMCVPR (2007)

  72. Zach, C., Pock, T., Bischof, H.: A duality based approach for realtime TV-L1 optical flow. In: Pattern Recognition, pp. 214–223. Springer, Berlin, Heidelberg (2007)

  73. Zhan, B., Monekosso, D.N., Remagnino, P., Velastin, S.A., Xu, L.Q.: Crowd analysis: a survey. Mach. Vis. Appl. 19(5–6), 345–357 (2008)

  74. Zhao, Y., Gong, H., Lin, L., et al.: Spatio-temporal patches for night background modeling by subspace learning. In: International Conference on Pattern Recognition, pp. 1–4 (2008)

Acknowledgments

This work was supported by NSFC (61375014) and by the Foundation of the Chinese Scholarship Council.

Author information

Corresponding author

Correspondence to Yang Cong.

About this article

Cite this article

Cong, Y., Gong, H., Tang, Y. et al. Real-time one-dimensional motion estimation and its application in computer vision. Machine Vision and Applications 26, 633–648 (2015). https://doi.org/10.1007/s00138-015-0688-8
