Multi-sensor-based detection and tracking of moving objects for relative position estimation in autonomous driving conditions

  • Jinwoo Kim
  • Yonggeon Choi
  • MyungWook Park
  • Sangwoo Lee
  • Sunghoon Kim


Moving object detection (MOD) technology combines detection, tracking, and classification to provide information such as the local and global position estimates and velocities of surrounding objects in real time, at a rate of at least 15 fps. To operate an autonomous vehicle on real roads, a multi-sensor-based object detection and classification module must run concurrently within the autonomous driving system to ensure safe driving. In addition, object detection must achieve high-speed processing performance on the limited hardware platform of an autonomous vehicle. To address this, we modified a detector based on Redmon's DARKNET deep learning framework to obtain local position estimates in real time. The aim of this study was to obtain the local position of a moving object by fusing information from multiple cameras and one RADAR. To this end, we built a fusion server that synchronizes and converts the multi-object information from the multiple sensors on our autonomous vehicle. In this paper, we introduce a method for local position estimation that covers the surrounding view, including the long-, middle-, and short-range regions. We also describe a method for handling the problems caused by steep slopes and curved roads while driving. Finally, we present the detection and tracking results of the proposed MOD system, which were used to obtain a license for autonomous driving in Korea.
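The fusion step described above can be sketched in miniature. The snippet below is a minimal illustration, not the authors' implementation: it assumes hypothetical `CameraDetection` and `RadarTarget` records and associates each camera detection with the radar return closest in time and azimuth, combining the camera's class label and bearing with the radar's range to produce a local (x, y) position.

```python
from dataclasses import dataclass
from bisect import bisect_left
import math

@dataclass
class RadarTarget:
    timestamp: float    # seconds
    range_m: float      # radial distance to target
    azimuth_rad: float  # bearing relative to vehicle heading

@dataclass
class CameraDetection:
    timestamp: float
    label: str          # e.g. "car", "pedestrian"
    azimuth_rad: float  # bearing estimated from the bounding-box center

def fuse(camera_dets, radar_targets, max_dt=0.033, max_daz=math.radians(5)):
    """Associate each camera detection with the radar target closest in
    time (within max_dt seconds) and azimuth (within max_daz radians),
    yielding (label, x, y) local positions in the vehicle frame."""
    radar_sorted = sorted(radar_targets, key=lambda t: t.timestamp)
    times = [t.timestamp for t in radar_sorted]
    fused = []
    for det in camera_dets:
        i = bisect_left(times, det.timestamp)
        # Only the radar samples bracketing the camera timestamp can be nearest.
        candidates = radar_sorted[max(0, i - 1): i + 1]
        best = None
        for tgt in candidates:
            if abs(tgt.timestamp - det.timestamp) > max_dt:
                continue
            if abs(tgt.azimuth_rad - det.azimuth_rad) > max_daz:
                continue
            if best is None or abs(tgt.timestamp - det.timestamp) < abs(best.timestamp - det.timestamp):
                best = tgt
        if best is not None:
            # Camera supplies the bearing, radar supplies the range.
            x = best.range_m * math.cos(det.azimuth_rad)  # forward
            y = best.range_m * math.sin(det.azimuth_rad)  # lateral
            fused.append((det.label, x, y))
    return fused
```

A real fusion server would also track object identities across frames and compensate for ego-motion between sensor timestamps; this sketch shows only the timestamp/azimuth gating that makes per-frame association possible.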


Keywords: Moving object detection · Deep learning · Local position estimation · Sensor fusion



This work was supported by Institute for Information and communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2016-0-00004, Development of Driving Computing System Supporting Real-time Sensor Fusion Processing for Self-Driving Car).


References

  1. Gebremeskel GB, Chai Y, Yang Z (2014) The paradigm of big data for augmenting internet of vehicle into the intelligent cloud computing systems. In: Internet of vehicles—technologies and services, pp 247–261
  2. Alam KM, Saini M, El Saddik A (2014) A social network of vehicles under internet of things. In: Internet of vehicles—technologies and services, pp 227–236
  3. Finogeev AG, Parygin DS, Finogeev AA (2017) The convergence computing model for big sensor data mining and knowledge discovery. In: Human-centric computing and information sciences
  4.
  5.
  6.
  7.
  8. Li B, Zhang T, Xia T (2016) Vehicle detection from 3D LiDAR using fully convolutional network. In: Computer vision and pattern recognition
  9. Viola P, Jones M (2001) Robust real-time object detection. In: International journal of computer vision
  10. Edgar S, Jean-Bernard H (2017) Probabilistic global scale estimation for MonoSLAM based on generic object detection. In: Workshop on visual odometry, computer vision and pattern recognition
  11.
  12. Redmon J, Farhadi A (2016) YOLO9000: better, faster, stronger. In: Computer vision and pattern recognition
  13. Gálvez-López D, Salas M, Tardós JD, Montiel J (2016) Real-time monocular object SLAM. Robot Auton Syst 75:435–449
  14. Salas-Moreno R, Newcombe R, Strasdat H, Kelly P, Davison A (2013) Simultaneous localization and mapping at the level of objects. In: Computer vision and pattern recognition
  15. Fu Y, Wang C (2018) Moving object localization based on UHF RFID phase and laser clustering. Sensors 18:825
  16. Zhong Z (2018) Camera radar fusion for increased reliability in ADAS applications. Soc Imaging Sci Technol 2018:258–262
  17. Liang M, Yang B, Wang S, Urtasun R (2018) Deep continuous fusion for multi-sensor 3D object detection. In: ECCV, pp 641–656
  18. Thakur R (2016) Scanning LIDAR in advanced driver assistance systems and beyond: building a road map for next-generation LIDAR technology. IEEE Consum Electron Mag 5:48–54
  19. Munir A (2017) Safety assessment and design of dependable cybercars: for today and the future. IEEE Consum Electron Mag 6:69–77
  20. Francesco P, Cristian Z, Andrea N, Piero O, Sergio S (2018) Is consumer electronics redesigning our cars? Challenges of integrated technologies for sensing, computing, and storage. IEEE Consum Electron Mag 7:8–17
  21. Scaramuzza DF (2009) Absolute scale in structure from motion from a single vehicle mounted camera by exploiting nonholonomic constraints. In: International Conference on Computer Vision
  22. Davison AJ (2003) Real-time simultaneous localization and mapping with a single camera. In: International Conference on Computer Vision, France
  23. Kim J (2018) Multi-camera based local position estimation for moving objects detection. In: BigComp 2018, Shanghai, China
  24. Park M, Lee S, Han W (2015) Development of steering control system for autonomous vehicle using geometry-based path tracking algorithm. ETRI J 37:617–625
  25. Noh S et al (2015) Co-pilot agent for vehicle/driver cooperative and autonomous driving. ETRI J 37:1032–1043
  26.
  27. NVIDIA, CUDA Technology.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Electronics and Telecommunications Research Institute (ETRI), Daejeon, Republic of Korea
