
Manipulable AI – An Insurmountable Obstacle to the Safety of Autonomous Vehicles?

Abstract

Autonomous cars are regarded as one of the most significant upcoming developments in future mobility and transport. Their widespread introduction and use require a high degree of reliability from self-driving vehicles. The visual sensors and image recognition systems of autonomous vehicles are particularly critical to ensuring safe traffic operation.
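The vulnerability the title alludes to is typically demonstrated with adversarial examples: inputs altered by perturbations small enough to be inconspicuous yet sufficient to flip a model's prediction. The following is a minimal, illustrative sketch of the Fast Gradient Sign Method (FGSM) on a toy logistic "pixel" model; the weights, input values, and epsilon are invented for illustration and do not come from the chapter:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM for a logistic model p = sigmoid(w @ x + b).

    The gradient of the cross-entropy loss w.r.t. the input is
    (p - y_true) * w; we step eps in the direction of its sign,
    which maximally increases the loss under an L-infinity budget.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy "image" of four pixels, correctly classified as class 1
# (logit w @ x + b = 0.6 > 0).
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.0
x = np.array([0.8, 0.4, 0.6, 0.2])

# A perturbation of at most 0.15 per pixel flips the prediction:
# the adversarial logit becomes -0.15, so the class changes,
# although no pixel moved by more than eps.
x_adv = fgsm_perturb(x, w, b, y_true=1, eps=0.15)
```

The same one-step attack scales directly to deep networks by replacing the closed-form gradient with backpropagation, which is what makes camera-based perception in vehicles a realistic target.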

Keywords

  • Autonomous Driving
  • Deep Learning
  • Convolutional Neural Networks

Fig. 1 (Author's own illustration)

Fig. 2 (Author's own illustration)

Fig. 3 (Author's own illustration)



Author information


Correspondence to Marko Kureljusic.



Copyright information

© 2021 The Author(s), under exclusive licence to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter


Cite this chapter

Kureljusic, M., Karger, E., Ahlemann, F. (2021). Manipulierbare KI – Ein unüberwindbares Hindernis für die Sicherheit autonomer Fahrzeuge?. In: Proff, H. (eds) Making Connected Mobility Work. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-32266-3_27


  • DOI: https://doi.org/10.1007/978-3-658-32266-3_27

  • Publisher Name: Springer Gabler, Wiesbaden

  • Print ISBN: 978-3-658-32265-6

  • Online ISBN: 978-3-658-32266-3

  • eBook Packages: Business and Economics (German Language)