
ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio

  • Nilaksh Das
  • Madhuri Shanbhogue
  • Shang-Tse Chen
  • Li Chen
  • Michael E. Kounavis
  • Duen Horng Chau
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11053)

Abstract

Adversarial machine learning research has recently demonstrated that automatic speech recognition (ASR) models can be confused by introducing acoustically imperceptible perturbations into audio samples. To help researchers and practitioners better understand the impact of such attacks, and to give them tools for more easily evaluating and crafting strong defenses for their models, we present Adagio, the first tool designed to allow interactive experimentation with adversarial attacks and defenses on an ASR model in real time, both visually and aurally. Adagio incorporates AMR and MP3 audio compression techniques as defenses, which users can interactively apply to attacked audio samples. We show that these techniques, which are based on psychoacoustic principles, effectively eliminate targeted attacks, reducing the attack success rate from 92.5% to 0%. We will demonstrate Adagio and invite the audience to try it on the Mozilla Common Voice dataset. Code related to this paper is available at: https://github.com/nilakshdas/ADAGIO.
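
As a concrete illustration of the compression-based defense described above, the following sketch round-trips an attacked waveform through MP3 before it is transcribed. This is a minimal sketch rather than the ADAGIO implementation: the pydub library (which relies on ffmpeg), the file names, and the 64 kbps bitrate are illustrative assumptions; an AMR round trip would be applied analogously.

    # Minimal sketch of a compression-based audio defense (assumption:
    # pydub is installed and ffmpeg is available on the PATH).
    # Re-encode the possibly adversarial audio as MP3 and decode it back,
    # letting lossy psychoacoustic compression discard the perturbation.
    from pydub import AudioSegment

    def mp3_compression_defense(in_wav_path, out_wav_path, bitrate="64k"):
        audio = AudioSegment.from_wav(in_wav_path)           # attacked sample
        audio.export("compressed.mp3", format="mp3", bitrate=bitrate)
        defended = AudioSegment.from_mp3("compressed.mp3")   # lossy round trip
        defended.export(out_wav_path, format="wav")
        return out_wav_path

    # The defended WAV (file names here are hypothetical) is then fed to the
    # ASR model, e.g. DeepSpeech, instead of the raw attacked sample.
    mp3_compression_defense("attacked_sample.wav", "defended_sample.wav")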

Keywords

Adversarial ML · Security · Speech recognition

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Nilaksh Das (1)
  • Madhuri Shanbhogue (1)
  • Shang-Tse Chen (1)
  • Li Chen (2)
  • Michael E. Kounavis (2)
  • Duen Horng Chau (1)
  1. Georgia Institute of Technology, Atlanta, USA
  2. Intel Corporation, Hillsboro, USA
