
Location and Fusion Algorithm of High-Rise Building Rescue Drill Scene Based on Binocular Vision

  • Conference paper
  • First Online:
Cyberspace Data and Intelligence, and Cyber-Living, Syndrome, and Health (CyberDI 2019, CyberLife 2019)

Abstract

In high-rise building emergency rescue drills, knowing the precise position of each participant helps coaches plan tactics, assess rescue efficiency, evaluate drill outcomes, and ensure participant safety. Video-based localization is a comparatively accurate approach: person detection and tracking can pin down each participant's position in the surveillance view. However, once a person is occluded they can no longer be detected and their identity information is lost; moreover, current techniques cannot reliably identify people from badges or markers alone. This paper therefore studies a fusion algorithm that exploits WiFi fingerprint localization, the most widely used indoor positioning method, which provides coarse position information together with personnel identity information. Detections carrying identity labels are obtained by matching the personnel information supplied by the WiFi fingerprint localization system to the persons detected in the video, and the WiFi fingerprint estimate also serves as a reference position during long occlusions. Taking advantage of the fact that a rescue drill involves a fixed number of participants with fixed identities, the paper further proposes a personnel tracking algorithm based on appearance and motion features. The algorithm reduces identity switches when participants are close together and retains each participant's appearance representation over long periods, so that a person can be re-identified after a long disappearance, avoiding the matching errors caused by repeatedly re-matching WiFi fingerprint information to video localization results.
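The matching step described above can be illustrated with a minimal sketch (not the authors' implementation): coarse WiFi fingerprint estimates carrying identities are assigned to anonymous video detections by solving a linear assignment problem on pairwise ground-plane distance. The `WifiEstimate` and `Detection` containers, the Euclidean cost, and the 3-metre gating threshold are illustrative assumptions.

```python
# Minimal sketch (illustrative only): assign identities from coarse WiFi
# fingerprint estimates to anonymous video detections via linear assignment
# on pairwise ground-plane distance.
from dataclasses import dataclass
import numpy as np
from scipy.optimize import linear_sum_assignment


@dataclass
class WifiEstimate:           # identity + coarse position from fingerprinting
    person_id: str
    xy: np.ndarray            # metres, ground plane


@dataclass
class Detection:              # precise but anonymous position from binocular video
    xy: np.ndarray


def fuse_identities(wifi, detections, gate=3.0):
    """Return {detection index: person_id} for pairs closer than `gate` metres."""
    if not wifi or not detections:
        return {}
    # cost[i, j] = distance between WiFi estimate i and detection j
    cost = np.array([[np.linalg.norm(w.xy - d.xy) for d in detections]
                     for w in wifi])
    rows, cols = linear_sum_assignment(cost)
    return {j: wifi[i].person_id
            for i, j in zip(rows, cols) if cost[i, j] <= gate}


if __name__ == "__main__":
    wifi = [WifiEstimate("rescuer_A", np.array([2.0, 1.0])),
            WifiEstimate("rescuer_B", np.array([8.5, 4.0]))]
    dets = [Detection(np.array([8.1, 4.2])), Detection(np.array([2.3, 0.8]))]
    print(fuse_identities(wifi, dets))   # {1: 'rescuer_A', 0: 'rescuer_B'}
```

In the paper's setting the same assignment idea would extend to the tracking stage, where the cost would also combine appearance and motion terms; the gate and distance metric here are illustrative choices only.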



Acknowledgement

This study was supported by the State's Key Project of Research and Development Plan (No. 2018YFC0810601, No. 2016YFC0901303). The work was conducted at the University of Science and Technology Beijing.

Author information


Corresponding author

Correspondence to Zhiguo Shi.



Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Ma, J., Shi, Z. (2019). Location and Fusion Algorithm of High-Rise Building Rescue Drill Scene Based on Binocular Vision. In: Ning, H. (eds) Cyberspace Data and Intelligence, and Cyber-Living, Syndrome, and Health. CyberDI 2019, CyberLife 2019. Communications in Computer and Information Science, vol 1137. Springer, Singapore. https://doi.org/10.1007/978-981-15-1922-2_32


  • DOI: https://doi.org/10.1007/978-981-15-1922-2_32

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-1921-5

  • Online ISBN: 978-981-15-1922-2

  • eBook Packages: Computer Science, Computer Science (R0)
