ADVERSARIAL ATTACKS ON MACHINE LEARNING MODELS USING FACE WARPING

Date

2022-04-21

Abstract

Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far short of human perception and recognition. These attacks also reveal the vulnerability of deep convolutional neural networks (CNNs), the state-of-the-art building block of face recognition models, to adversarial examples, which can have serious consequences for security-critical systems.

Gradient-based adversarial attacks have been widely studied and have proven successful against face recognition models. However, finding an optimized perturbation for each face requires submitting a large number of queries to the target model.
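To illustrate why black-box gradient-based attacks are query-hungry, the sketch below (not the thesis's method; all names are hypothetical) estimates a gradient by symmetric finite differences, which costs two model queries per input dimension, so the query count scales with image size:

```python
import numpy as np

def estimate_gradient(query_loss, x, eps=1e-3):
    """Zeroth-order gradient estimate via symmetric finite differences.
    Each input dimension costs two queries, so the total query count
    is 2 * x.size -- prohibitive for full-resolution face images."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        grad.flat[i] = (query_loss(x + e) - query_loss(x - e)) / (2 * eps)
    return grad

# Toy stand-in for a model's loss: squared distance from an all-ones point.
loss = lambda x: float(np.sum((x - 1.0) ** 2))
x = np.zeros(4)
g = estimate_gradient(loss, x)  # true gradient of the toy loss is 2*(x - 1)
```

For this quadratic toy loss the estimate matches the analytic gradient, but the eight queries used here for a 4-dimensional input would become hundreds of thousands for a real face image.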

In this research, we propose a recursive adversarial attack on face recognition using automatic face warping, which requires only a limited number of queries to fool the target model. Rather than warping the face at random, the warping functions are applied to specific detected regions of the face, such as the eyebrows, nose, and lips. We evaluate the robustness of the proposed method in the decision-based black-box setting, where the attacker has no access to model parameters or gradients and the target model returns only hard-label predictions and confidence scores. Additionally, we release a novel dataset of celebrity images and their warped counterparts for other researchers to use.
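The recursive, query-limited loop described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the thesis's implementation: the "warp" is a crude pixel shift standing in for a smooth face-warping function, the region box would in practice come from a facial-landmark detector (e.g. for eyebrows, nose, or lips), and `query_model` stands for the black-box target that returns only a hard label:

```python
import numpy as np

def warp_region(img, box, strength):
    """Crude stand-in for a face-warping function: shift the pixels
    inside `box` horizontally by `strength` pixels."""
    y0, y1, x0, x1 = box
    out = img.copy()
    out[y0:y1, x0:x1] = np.roll(img[y0:y1, x0:x1], int(strength), axis=1)
    return out

def recursive_warp_attack(img, box, query_model, true_label,
                          max_queries=20, step=1.0):
    """Recursively strengthen the warp on one facial region until the
    model's hard-label prediction flips or the query budget runs out
    (decision-based black-box setting: labels only, no gradients)."""
    strength = step
    for queries in range(1, max_queries + 1):
        adv = warp_region(img, box, strength)
        if query_model(adv) != true_label:
            return adv, queries    # success: label flipped
        strength += step           # recurse with a stronger warp
    return None, max_queries       # failed within the query budget

# Toy demo: a stub "model" that labels an image by its brightest column.
img = np.zeros((8, 8))
img[:, 2] = 1.0
model = lambda x: int(np.argmax(x.sum(axis=0)))
adv, queries = recursive_warp_attack(img, (0, 8, 0, 8), model, true_label=2)
```

The key design point the abstract emphasizes survives even in this toy: the attacker spends one query per warp strength tried, so the total query count is bounded by the recursion depth rather than by the image dimensionality.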

Keywords

Adversarial attack, Face recognition, Machine learning

Citation

Portions of this document appear in: Kasichainula, Keshav, Hadi Mansourifar, and Weidong Shi. "Privacy preserving proxy for machine learning as a service." In 2020 IEEE International Conference on Big Data (Big Data), pp. 4006-4015. IEEE, 2020; and in: Kasichainula, Keshav, Hadi Mansourifar, and Weidong Shi. "Poisoning Attacks via Generative Adversarial Text to Image Synthesis." In 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), pp. 158-165. IEEE, 2021.