Detecting Deepfake Voice Using Explainable Deep Learning Techniques

Research output: Contribution to journal › Article › peer-review

67 Scopus citations

Abstract

Fake media generated by methods such as deepfakes have become nearly indistinguishable from real media, but detection methods have not improved at the same pace. Furthermore, the lack of interpretability in deepfake detection models makes their reliability questionable. In this paper, we present human-perception-level interpretability for deepfake audio detection. Based on their characteristics, we apply several explainable artificial intelligence (XAI) methods developed for image classification to an audio-related task. In addition, by examining the human cognitive process of interpreting XAI output for image classification, we suggest a corresponding data format for providing interpretability. Using this novel concept, a fresh interpretation based on attribution scores can be provided.

Original language: English
Article number: 3926
Journal: Applied Sciences (Switzerland)
Volume: 12
Issue number: 8
DOIs
State: Published - 1 Apr 2022

Bibliographical note

Publisher Copyright:
© 2022 by the authors. Licensee MDPI, Basel, Switzerland.

Keywords

  • deepfake detection
  • explainable artificial intelligence (XAI)
  • human-centered artificial intelligence

