
Fusion of raw sensor data for testing applications in autonomous driving

  • Julius von Falkenhausen
  • Qi Liu
Conference paper
Part of the Proceedings book series (PROCEE)

Abstract

To achieve the goal of an autonomous driving vehicle, more and more sensors are being integrated into the vehicle. For previous vehicle generations it was sufficient that these sensors sent object data, which was then fused into an environment model. However, a more accurate model that allows the vehicle to navigate autonomously requires the raw sensor data.
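To illustrate the distinction the abstract draws between object-level fusion and fusion of raw sensor data, the following minimal Python sketch may be helpful. It is not taken from the paper; all function names and parameters (fuse_object_level, fuse_raw_into_grid, gate, cell_size, extent) are hypothetical. It contrasts a naive fusion of two sensors' object lists with the accumulation of raw 2D point measurements into a common occupancy grid.

    # Illustrative sketch only (not the paper's method): object-level fusion,
    # where each sensor reports already-detected objects, versus raw-data
    # fusion, where 2D point measurements are accumulated directly into a
    # shared occupancy grid. All names and parameters are hypothetical.
    import numpy as np

    def fuse_object_level(detections_a, detections_b, gate=1.0):
        """Naive object-level fusion: average the positions of detections from
        two sensors that lie within a gating distance of each other."""
        fused = []
        for pa in detections_a:
            for pb in detections_b:
                if np.linalg.norm(np.asarray(pa) - np.asarray(pb)) < gate:
                    fused.append(tuple((np.asarray(pa) + np.asarray(pb)) / 2.0))
        return fused

    def fuse_raw_into_grid(point_clouds, cell_size=0.5, extent=20.0):
        """Raw-data fusion: drop every measured point from every sensor into a
        common occupancy grid centred on the ego vehicle."""
        n = int(2 * extent / cell_size)
        grid = np.zeros((n, n), dtype=int)
        for cloud in point_clouds:
            for x, y in cloud:
                i = int((x + extent) / cell_size)
                j = int((y + extent) / cell_size)
                if 0 <= i < n and 0 <= j < n:
                    grid[i, j] += 1  # higher count = stronger occupancy evidence
        return grid

    if __name__ == "__main__":
        radar_objects = [(5.0, 1.2)]
        camera_objects = [(5.3, 1.0)]
        print("object-level:", fuse_object_level(radar_objects, camera_objects))

        lidar_points = [(5.1, 1.1), (5.2, 0.9), (10.0, -3.0)]
        radar_points = [(5.0, 1.0)]
        grid = fuse_raw_into_grid([lidar_points, radar_points])
        print("occupied cells:", int((grid > 0).sum()))

The object-level variant can only combine what each sensor has already decided is an object, whereas the raw-data variant keeps every measurement and defers interpretation to the environment model, which is the motivation stated in the abstract.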



Copyright information

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020

Authors and Affiliations

  1. BFFT Gesellschaft für Fahrzeugtechnik mbH, Gaimersheim, Germany
