ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack

Dahoon Park, Kon Woo Kwon, Sunghoon Im, Jaeha Kung

Research output: Contribution to conference › Paper › peer-review

Abstract

In this paper, we present the Zero-data Based Repeated bit flip Attack (ZeBRA), which precisely destroys deep neural networks (DNNs) by synthesizing its own attack datasets. Many prior adversarial weight attacks require not only the weight parameters but also the training or test dataset to search for vulnerable bits to attack. We propose to synthesize the attack dataset, named distilled target data, by exploiting the statistics stored in the batch normalization layers of the victim DNN model. Equipped with the distilled target data, our ZeBRA algorithm can search for vulnerable bits in the model without access to the training or test dataset. Thus, our approach makes the adversarial weight attack more fatal to the security of DNNs. Our experimental results show that, on average, 2.0× (CIFAR-10) and 1.6× (ImageNet) fewer bit flips are required to destroy DNNs compared to the previous attack method. Our code is available at https://github.com/pdh930105/ZeBRA.
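The core idea of the distillation step, synthesizing inputs whose batch statistics match the running mean and variance stored in a batch normalization layer, can be sketched in NumPy. This is a minimal illustration, not the paper's actual algorithm: `W`, `bn_mean`, `bn_var`, and all function names are hypothetical stand-ins for a single toy layer of a victim model, and the real ZeBRA procedure operates over the full network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim layer: a linear layer followed by batch norm.
# These parameters stand in for what an attacker reads from the model.
W = rng.normal(size=(4, 8)) / np.sqrt(8)   # layer weights (out=4, in=8)
bn_mean = rng.normal(size=4)               # BN running mean (stored statistic)
bn_var = rng.uniform(0.5, 1.5, size=4)     # BN running variance (stored statistic)

def bn_stat_loss(x):
    """Distance between the batch's pre-BN statistics and the stored ones."""
    z = x @ W.T
    return np.sum((z.mean(0) - bn_mean) ** 2) + np.sum((z.var(0) - bn_var) ** 2)

def distill_batch(batch_size=32, steps=500, lr=0.3):
    """Gradient-descend a random batch until its statistics approach BN's."""
    n = batch_size
    x = rng.normal(size=(n, 8))
    for _ in range(steps):
        z = x @ W.T
        mu, var = z.mean(0), z.var(0)
        # Analytic gradient of bn_stat_loss w.r.t. z, chained back to x.
        dz = 2 * (mu - bn_mean) / n + 4 * (var - bn_var) * (z - mu) / n
        x -= lr * (dz @ W)
    return x
```

A batch produced this way ("distilled" data) can then stand in for real training or test samples when ranking which weight bits are most damaging to flip.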

Original language: English
State: Published - 2021
Event: 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online
Duration: 22 Nov 2021 - 25 Nov 2021


Bibliographical note

Publisher Copyright:
© 2021. The copyright of this document resides with its authors.

