
Extracting motion data from video using optical flow with physically-based constraints

  • Regular Paper
  • Control Applications
  • Published in: International Journal of Control, Automation and Systems

Abstract

Motion analysis of video data is a powerful tool for studying dynamic behavior and determining sources of failures. In the case of failure analysis, the available video may be of poor quality, such as from surveillance cameras. It is also likely to have been taken from a bad angle, with poor lighting, and occlusions may be present. To address such cases, this paper presents an optical flow-based tracking algorithm incorporating physically-based constraints to extract motion data from video. The technique can accurately track a significant number of data points with a high degree of automation and efficiency. Many traditional methods of video data extraction from poor-quality video have proven tedious and time-consuming due to extensive user-input requirements. With this in mind, the proposed optical flow-based algorithm functions with a minimal degree of user involvement. Points identified at the outset of a video sequence, and within a small subset of frames spaced throughout, can be automatically tracked even when they become occluded or undergo translational, rotational, or deformational motion. The proposed algorithm improves upon previous optical flow-based tracking algorithms by providing greater flexibility and robustness. Example results are presented that show the method tracking machines with flexible components, Segway personal transporters, and athletes pole vaulting.
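
The full algorithm is not reproduced in this preview, but the following minimal Python sketch illustrates the general idea of constrained optical-flow point tracking under stated assumptions: it uses OpenCV's pyramidal Lucas-Kanade tracker (not the paper's own flow formulation) and enforces a simple rigid-body constraint by re-projecting the tracked points onto the best-fit rigid transform of their first-frame layout, which also re-seeds points the tracker loses under occlusion. The video file name, the initial point coordinates, and the rigid-body assumption itself are placeholders chosen for illustration.

import cv2
import numpy as np


def rigid_fit(src, dst):
    # Least-squares rotation R and translation t mapping src points onto dst
    # points (Kabsch/Procrustes solution in 2D).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t


cap = cv2.VideoCapture("machine.mp4")   # placeholder file name
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Points selected on one rigid component in the first frame (placeholder values).
init_pts = np.array([[120.0, 80.0], [180.0, 85.0], [150.0, 140.0]], dtype=np.float32)
pts = init_pts.copy()

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Unconstrained pyramidal Lucas-Kanade optical flow for the current points.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts.reshape(-1, 1, 2), None, **lk_params)
    new_pts, status = new_pts.reshape(-1, 2), status.ravel()

    good = status == 1
    if good.sum() >= 2:
        # Physically-based constraint: all points move together as a rigid body.
        # Fit a rigid transform from the first-frame layout to the tracked points,
        # then replace every point (including lost ones) with its rigid prediction.
        R, t = rigid_fit(init_pts[good], new_pts[good])
        pts = ((init_pts @ R.T) + t).astype(np.float32)
    else:
        pts = new_pts                    # too few reliable points; keep raw flow

    prev_gray = gray
    # pts now holds the constrained track for this frame (log or visualize it here).

For flexible or deforming components such as those tracked in the paper, the rigid fit would need to be replaced by a constraint matched to the relevant physics (for example, a pendulum or link-length model); the structure of the loop stays the same.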

Author information

Corresponding author

Correspondence to David Frakes.

Additional information

Recommended by Editorial Board member Myung Geun Chun under the direction of Editor Hyouk Ryeol Choi.

David Frakes received his Ph.D. from the Georgia Institute of Technology, USA, in 2003. He then co-founded 4-D Imaging, Inc., a small business that provides image and video processing solutions for the biomedical and military sectors. He later joined the faculty of the School of Biological and Health Systems Engineering and the School of Electrical, Computer, and Energy Engineering at Arizona State University. His research interests include image and video processing, fluid dynamics, and machine vision. Dr. Frakes has a pole vault pit in his backyard, and his personal record in the event is 4.60 m.

Christine Zwart received her M.S. from Arizona State University, USA, in 2011. She is currently pursuing a Ph.D. at ASU’s Harrington Department of Bioengineering with funding from the National Science Foundation Graduate Research Fellowship Program. Prior to receiving the NSF fellowship, Mrs. Zwart was funded by Science Foundation Arizona. Her research interests include biomedical and consumer image processing with emphasis on interpolation and evaluation. Mrs. Zwart has not pole vaulted (yet); however, her personal record for water ski jumping is 12 m.

William Singhose received his Ph.D. from the Massachusetts Institute of Technology, USA, in 1997. He then joined the faculty of the Woodruff School of Mechanical Engineering at the Georgia Institute of Technology, USA. Dr. Singhose worked at Convolve, Inc. before getting his Ph.D. He developed and installed control systems on industrial machines such as silicon-handling robots, coordinate measuring machines, and high-precision air-bearing positioning stages. Dr. Singhose has held visiting appointments at the Tokyo Institute of Technology, the Polytechnic University of Madrid, and MIT. His research interests are dynamics, controls, active seats, and air traffic flow management. His personal record in the pole vault is 4.95 m.

About this article

Cite this article

Frakes, D., Zwart, C. & Singhose, W. Extracting motion data from video using optical flow with physically-based constraints. Int. J. Control Autom. Syst. 11, 48–57 (2013). https://doi.org/10.1007/s12555-011-0026-5
