
Journal of Signal Processing Systems, Volume 88, Issue 2, pp 219–231

A Container-Based Elastic Cloud Architecture for Pseudo Real-Time Exploitation of Wide Area Motion Imagery (WAMI) Stream

  • Ryan Wu
  • Bingwei Liu
  • Yu Chen (corresponding author)
  • Erik Blasch
  • Haibin Ling
  • Genshe Chen

Abstract

Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and text data is highly desired for many mission-critical emergency or military applications. However, the enormous data rate still makes it infeasible to process a streaming WAMI feed in real time and achieve online, uninterrupted target tracking. In this paper, a pseudo-real-time Dynamic Data Driven Applications System (DDDAS) WAMI data stream processing scheme is proposed. Taking advantage of temporal and spatial locality, a divide-and-conquer strategy is adopted to overcome the challenge posed by the large volume of dynamic data. In the Pseudo Real-time Exploitation of Sub-Area (PRESA) framework, each WAMI frame is divided into multiple sub-areas, and specified sub-areas are assigned to virtual machines in a container-based cloud computing architecture that allows dynamic resource provisioning to meet the performance requirements. A prototype has been implemented, and the experimental results validate the effectiveness of our approach.
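
As a rough illustration of the divide-and-conquer idea summarized above, the sketch below splits a frame into a grid of sub-areas and farms each tile out to a parallel worker that stands in for one container instance. It is a minimal sketch under assumed names (split_into_subareas, process_subarea, the grid size, and the per-tile statistic are all illustrative), not the paper's actual PRESA implementation; in the framework itself, each sub-area would be handled by a container that the cloud provisions elastically.

```python
# Minimal sketch: tile one WAMI frame into sub-areas and process the tiles in
# parallel workers standing in for containers. All names and parameters here
# are illustrative assumptions, not taken from the paper.
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def split_into_subareas(frame: np.ndarray, rows: int, cols: int):
    """Yield ((row, col), tile) pairs that cover the full frame."""
    h, w = frame.shape[:2]
    for r in range(rows):
        for c in range(cols):
            tile = frame[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            yield (r, c), tile


def process_subarea(item):
    """Placeholder per-container work on one tile (e.g. feature extraction)."""
    index, tile = item
    # A real worker would run detection/tracking here; we only report a tile statistic.
    return index, float(tile.mean())


if __name__ == "__main__":
    # Stand-in for one grayscale WAMI frame.
    frame = np.random.randint(0, 256, (2048, 2048), dtype=np.uint8)
    with ProcessPoolExecutor(max_workers=4) as pool:  # four "containers"
        results = dict(pool.map(process_subarea, split_into_subareas(frame, 2, 2)))
    print(results)
```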

Keywords

WAMI (wide-area motion imagery) · Dynamic data-driven application systems · Pseudo-real-time processing · Container-based cloud

Acknowledgements

This work was supported by the US Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP) and a grant from AFOSR in Dynamic Data-Driven Application Systems. Ryan Wu was a summer undergraduate AFRL research fellow.

The authors also express their gratitude to Dr. Erkang Cheng for his valuable suggestions and discussions on the SIFT data set and algorithms.

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Ryan Wu (1)
  • Bingwei Liu (1)
  • Yu Chen (1) (corresponding author)
  • Erik Blasch (2)
  • Haibin Ling (3)
  • Genshe Chen (4)

  1. Department of Electrical & Computer Engineering, Binghamton University, SUNY, Binghamton, USA
  2. Air Force Research Laboratory, Rome, USA
  3. Department of Computer & Information Sciences, Temple University, Philadelphia, USA
  4. Intelligent Fusion Technology, Inc., Germantown, USA
