The Visual Computer, Volume 29, Issue 9, pp 927–936

Inserting virtual pedestrians into pedestrian groups video with behavior consistency

  • Zhiguo Ren
  • Wenjing Gai
  • Fan Zhong
  • Julien Pettré
  • Qunsheng Peng
Original Article

Abstract

In this paper, we propose a novel approach to inserting virtual pedestrians into video of real pedestrian groups with behavior consistency, achieved through dynamic path planning of the virtual pedestrians. Rather than accounting only for local collision avoidance, our approach finds an optimized path for each virtual pedestrian based on the current global distribution of the real groups in the scene. The main challenge is that the positions and velocities of the real pedestrians in the video are not available in advance, and that the distribution of the groups may vary dynamically. We therefore detect and track the real pedestrians in each frame of the video to acquire their distribution and motion information, and store this information in an efficient data structure called the environment grid. As a virtual pedestrian proceeds along its path, the corresponding agent periodically casts detection rays through the environment cells to probe the situation of the real pedestrians ahead of it and adjusts the original path if necessary. Finally, the virtual pedestrians are merged into the video, with the occlusions between the virtual characters and the real pedestrians correctly rendered. Experimental results on several scenarios demonstrate the effectiveness of the proposed approach.
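The abstract does not specify the environment grid or the ray-casting scheme in detail; a minimal sketch of how such a structure might work is given below. All names, parameters, and the cell-stepping strategy are hypothetical illustrations, not the authors' implementation: a uniform 2D grid over the ground plane caches the tracked pedestrians per frame, and an agent queries it by stepping a detection ray cell by cell.

```python
import math

class EnvironmentGrid:
    """Hypothetical sketch of an 'environment grid': a uniform 2D grid
    over the ground plane whose cells cache the tracked real pedestrians
    (position and velocity) for the current video frame."""

    def __init__(self, width, height, cell_size):
        self.cell_size = cell_size
        self.cols = int(math.ceil(width / cell_size))
        self.rows = int(math.ceil(height / cell_size))
        self.cells = {}  # (col, row) -> list of (position, velocity)

    def _cell_of(self, pos):
        # Map a ground-plane point to its grid cell.
        return (int(pos[0] // self.cell_size), int(pos[1] // self.cell_size))

    def rebuild(self, pedestrians):
        """Refill the grid from this frame's tracked pedestrians,
        given as an iterable of ((x, y), (vx, vy)) pairs."""
        self.cells.clear()
        for pos, vel in pedestrians:
            self.cells.setdefault(self._cell_of(pos), []).append((pos, vel))

    def cast_ray(self, origin, direction, max_dist, step=None):
        """Step along a detection ray and return (cell, occupants) for
        the first cell ahead containing real pedestrians, or None."""
        step = step or self.cell_size * 0.5
        norm = math.hypot(direction[0], direction[1])
        dx, dy = direction[0] / norm, direction[1] / norm
        t, seen = 0.0, set()
        while t <= max_dist:
            cell = self._cell_of((origin[0] + dx * t, origin[1] + dy * t))
            if cell not in seen:
                seen.add(cell)
                if self.cells.get(cell):
                    return cell, self.cells[cell]
            t += step
        return None
```

An agent could then cast a fan of such rays each frame along candidate headings and trigger replanning whenever a ray reports occupied cells on its current path.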

Keywords

Mixed reality · Agent-based simulation · Steering methods · Path planning

Supplementary material

(WMV 4.3 MB)


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Zhiguo Ren (1)
  • Wenjing Gai (1)
  • Fan Zhong (2)
  • Julien Pettré (3)
  • Qunsheng Peng (1)
  1. State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
  2. School of Computer Science and Technology, Shandong University, Jinan, China
  3. Mimetic team, INRIA Rennes, Rennes, France
