Abstract
The segmentation of moving and non-moving regions in an image is a crucial step toward understanding crowd behavior in crowd analysis. In many studies, similar movements are segmented according to location, mutual adjacency, direction, and average speed. However, the resulting segments may not exhibit the same type of behavior throughout each region. The purpose of this study is to understand crowd behavior better by locally measuring the degree of interaction/complexity within each segment. To this end, the motion flow in the image is first represented as a set of trajectories. The image is divided into hexagonal cells, and the finite-time braid entropy (FTBE) value of each cell is computed for different projection angles. These values depend on the complexity of the braided structure that the trajectories generate as the motion evolves, and they indicate the degree of interaction among pedestrians. In this study, segments that appear to move similarly as a whole are shown to contain behaviors of differing complexity. The approach has been tested on 49 video sequences from the UCF and CUHK databases.
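The core idea above, projecting trajectories onto an axis and reading off the sequence of strand exchanges, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `braid_generators` is a hypothetical helper that extracts a signed braid-generator sequence from trajectories for one projection angle. Computing the actual FTBE from such a sequence additionally requires tracking the growth of topological loops under the braid action, e.g. with the braidlab package (Thiffeault and Budišić, 2014).

```python
import numpy as np

def braid_generators(traj, theta=0.0):
    """Extract a signed braid-generator sequence from 2D trajectories.

    traj  : array of shape (T, N, 2) -- T timesteps, N tracked points.
    theta : projection angle in radians.

    A generator +i / -i means the strands at positions i and i+1
    (1-based, ordered along the projection axis) exchange, with the
    sign taken from which strand passes in front (the perpendicular
    coordinate at the moment of crossing).
    """
    T, N, _ = traj.shape
    # Coordinate along the projection axis and perpendicular to it.
    u = traj[:, :, 0] * np.cos(theta) + traj[:, :, 1] * np.sin(theta)
    v = -traj[:, :, 0] * np.sin(theta) + traj[:, :, 1] * np.cos(theta)

    order = list(np.argsort(u[0]))   # point ids sorted along the axis
    gens = []
    for t in range(1, T):
        rank = np.empty(N, dtype=int)
        rank[np.argsort(u[t])] = np.arange(N)
        # Bubble the previous ordering into the new one; every adjacent
        # swap corresponds to one braid generator (dense temporal
        # sampling is assumed, so crossings are resolved pairwise).
        changed = True
        while changed:
            changed = False
            for i in range(N - 1):
                a, b = order[i], order[i + 1]
                if rank[a] > rank[b]:
                    sign = 1 if v[t, a] > v[t, b] else -1
                    gens.append(sign * (i + 1))
                    order[i], order[i + 1] = b, a
                    changed = True
    return gens
```

Restricting `traj` to the tracked points inside one hexagonal cell and sweeping `theta` over several angles would yield per-cell generator sequences; their braiding action on loops then gives the per-cell FTBE values described in the abstract.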
Additional information
Project supported by the Gümüşhane University Scientific Research Projects Coordination Department (No. 15.B0311.02.01)
Cite this article
Akpulat, M., Ekinci, M. Detecting interaction/complexity within crowd movements using braid entropy. Frontiers Inf Technol Electronic Eng 20, 849–861 (2019). https://doi.org/10.1631/FITEE.1800313
Key words
- Crowd behavior
- Motion segmentation
- Motion entropy
- Crowd scene analysis
- Complexity detection
- Braid entropy