Generating Dual-Identity Face Impersonations with Generative Adversarial Networks: An Adversarial Attack Methodology
Keywords: Generative Adversarial Networks (GANs), Adversarial Attacks, Face Recognition

Abstract
Background: Face recognition systems, powered by deep neural networks, are increasingly integral to security and user authentication applications. However, these systems are vulnerable to adversarial attacks, where carefully crafted inputs deceive the model. While existing research has explored attacks that cause misclassification (dodging) or impersonate a single target, a more complex threat involves generating a single face that can be successfully verified as two separate identities—a "dual-identity" attack.
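To make the threat model concrete, the following sketch reduces verification to cosine similarity between embeddings with a fixed acceptance threshold; a dual-identity image must clear that threshold against both enrolled targets. The embedding inputs and the 0.40 threshold are illustrative assumptions, not values taken from the paper or from any specific recognition model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_dual_identity(emb_adv, emb_target1, emb_target2, threshold=0.40):
    """A dual-identity attack succeeds only if the adversarial face's
    embedding clears the verification threshold for BOTH enrolled
    targets. The 0.40 threshold is a hypothetical operating point."""
    s1 = cosine_similarity(emb_adv, emb_target1)
    s2 = cosine_similarity(emb_adv, emb_target2)
    return s1 >= threshold and s2 >= threshold
```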
Objective: This paper introduces and evaluates a novel methodology for crafting dual-identity face impersonations using Generative Adversarial Networks (GANs). Our objective is to develop an end-to-end framework capable of generating a single, visually plausible facial image that can successfully deceive a state-of-the-art face recognition system into matching it with two distinct, pre-selected target identities.
Methods: We propose a GAN-based architecture specifically designed for this attack. The core of our contribution is a novel dual-identity loss function that simultaneously maximizes the similarity score with two different target identities while minimizing the visual perturbation to a source image. The methodology leverages a momentum-iterative algorithm to enhance attack strength and ensure high transferability to black-box models. We trained and evaluated our system using the Labeled Faces in the Wild (LFW) dataset against several state-of-the-art face recognition models, including ArcFace and GhostFaceNets.
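A minimal sketch of how a dual-identity objective could be combined with a momentum-iterative update (in the spirit of Dong et al. 2018) is given below. The embedding model `f`, the weighting `lam`, the step sizes, and the L-infinity budget are illustrative assumptions; the paper's actual generator architecture and loss weights are not reproduced here.

```python
import torch
import torch.nn.functional as F

def dual_identity_loss(f, x_adv, x_src, emb_t1, emb_t2, lam=0.1):
    """Hypothetical dual-identity objective: reward cosine similarity of
    the adversarial image's embedding to BOTH (pre-normalized) target
    embeddings while penalizing visible deviation from the source."""
    emb_adv = F.normalize(f(x_adv), dim=-1)
    sim = (F.cosine_similarity(emb_adv, emb_t1, dim=-1)
           + F.cosine_similarity(emb_adv, emb_t2, dim=-1))
    perturbation = torch.norm(x_adv - x_src, p=2)
    return -sim.mean() + lam * perturbation

def momentum_iterative_attack(f, x_src, emb_t1, emb_t2,
                              steps=40, eps=8/255, alpha=2/255, mu=1.0):
    """MI-FGSM-style loop applied to the dual-identity loss; the momentum
    term stabilizes gradient directions across steps, which is the usual
    mechanism behind improved black-box transferability."""
    x_adv = x_src.clone().detach()
    g = torch.zeros_like(x_src)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = dual_identity_loss(f, x_adv, x_src, emb_t1, emb_t2)
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
        x_adv = (x_adv - alpha * g.sign()).detach()       # descend the loss
        x_adv = x_src + (x_adv - x_src).clamp(-eps, eps)  # L_inf budget
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```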
Results: Our proposed method achieved a high Attack Success Rate (ASR), successfully fooling target models into verifying the generated image as both identities in a significant percentage of test cases. The attack also demonstrated strong transferability to black-box systems. Qualitative results show that the generated adversarial faces remain visually coherent and inconspicuous, making them practical for stealthy attacks.
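For clarity, the dual-identity ASR reported above can be read as the fraction of attempts in which a single generated image is verified as both targets simultaneously. A hedged sketch of that accounting follows; the per-trial booleans are an assumed interface, not the paper's evaluation code.

```python
def dual_identity_asr(trials):
    """trials: iterable of (matched_target1, matched_target2) booleans,
    one pair per generated adversarial face. An attempt counts as a
    success only when BOTH verifications pass."""
    trials = list(trials)
    successes = sum(1 for m1, m2 in trials if m1 and m2)
    return successes / len(trials) if trials else 0.0
```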
Conclusion: The ability to generate dual-identity impersonations represents a significant evolution in adversarial threats against biometric security. Our findings underscore a critical vulnerability in current face recognition systems and highlight the urgent need for developing more robust defense mechanisms against sophisticated, GAN-driven adversarial attacks.
References
Alansari, M., Hay, O. A., Javed, S., Shoufan, A., Zweiri, Y., & Werghi, N. 2023. GhostFaceNets: lightweight face recognition model from cheap operations. IEEE Access 11:35429–46.
Arjovsky, M., & Bottou, L. 2017. Towards principled methods for training generative adversarial networks. In Proceedings of the 5th International Conference on Learning Representations.
Baluja, S., & Fischer, I. 2018. Learning to attack: adversarial transformation networks. In Proceedings of the AAAI Conference on Artificial Intelligence.
Carlini, N., & Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE. 39–57.
Deb, D., Zhang, J., & Jain, A. K. 2020. AdvFaces: adversarial face synthesis. In 2020 IEEE International Joint Conference on Biometrics (IJCB). IEEE. 1–10.
Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., & Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 9185–93.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems 27.
Goodfellow, I. J., Shlens, J., & Szegedy, C. 2014. Explaining and harnessing adversarial examples. ArXiv.
Gu, J., Jia, X., De Jorge, P., Yu, W., Liu, X., Ma, A., Xun, Y., Hu, A., Khakzar, A., & Li, Z. 2023. A survey on transferability of adversarial examples across deep neural networks. ArXiv.
Hangaragi, S., Singh, T., & N, N. 2023. Face detection and recognition using face mesh and deep neural network. Procedia Computer Science 218:741–49.
Hu, S., Liu, X., Zhang, Y., Li, M., Zhang, L. Y., Jin, H., & Wu, L. 2022. Protecting facial privacy: generating adversarial identity masks via style-robust makeup transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 15014–23.
Huang, G. B., Mattar, M., Berg, T., & Learned-Miller, E. 2008. Labeled faces in the wild: a database for studying face recognition in unconstrained environments. In Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition.
Huang, H., Wang, Y., Yuan, G., & Li, X. 2024. A Gaussian noise-based algorithm for enhancing backdoor attacks. Computers, Materials & Continua 80(1):361.
Komkov, S., & Petiushko, A. 2021. AdvHat: real-world adversarial attack on ArcFace Face ID system. In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE. 819–26.
Kortli, Y., Jridi, M., Al Falou, A., & Atri, M. 2020. Face recognition systems: a survey. Sensors 20(2):342.
Kurakin, A., Goodfellow, I. J., & Bengio, S. 2018. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security. Chapman and Hall/CRC. 99–112.
Li, Z., Yin, B., Yao, T., Guo, J., Ding, S., Chen, S., & Liu, C. 2023. Sibling-attack: rethinking transferable adversarial attacks against face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 24626–37.
Liu, F., Chen, D., Wang, F., Li, Z., & Xu, F. 2023. Deep learning based single sample face recognition: a survey. Artificial Intelligence Review 56(3):2723–48.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. ArXiv.
Rai, A., Lall, B., Zalani, A., Prakash, R., & Srivastava, S. 2023. Enforcement of DNN with LDA-PCA-ELM for PIE invariant few-shot face recognition. In International Conference on Pattern Recognition and Machine Intelligence. Springer. 791–801.
Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. 2016. Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 1528–40.
Sharma, P., Kumar, M., & Sharma, H. K. 2024. GAN-CNN ensemble: a robust deepfake detection model of social media images using minimized catastrophic forgetting and generative replay technique. Procedia Computer Science 235:948–60.
Sun, Y., Yu, L., Xie, H., Li, J., & Zhang, Y. 2024. DiffAM: diffusion-based adversarial makeup transfer for facial privacy protection. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 24584–94.
Wang, Y., Sun, T., Li, S., Yuan, X., Ni, W., Hossain, E., & Poor, H. V. 2023. Adversarial attacks and defenses in machine learning-empowered communication systems and networks: a contemporary survey. IEEE Communications Surveys & Tutorials 25(4):2245–98.
Xiao, C., Li, B., Zhu, J.-Y., He, W., Liu, M., & Song, D. 2018. Generating adversarial examples with adversarial networks. ArXiv.
Yang, X., Dong, Y., Pang, T., Su, H., Zhu, J., Chen, Y., & Xue, H. 2021. Towards face encryption by generating adversarial identity masks. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. 3897–3907.
Yang, X., Liu, C., Xu, L., Wang, Y., Dong, Y., Chen, N., Su, H., & Zhu, J. 2023. Towards effective adversarial textured 3D meshes on physical face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4119–28.
Yi, D., Lei, Z., Liao, S., & Li, S. Z. 2014. Learning face representation from scratch. ArXiv.
Yin, B., Wang, W., Yao, T., Guo, J., Kong, Z., Ding, S., Li, J., & Liu, C. 2021. Adv-Makeup: a new imperceptible and transferable attack on face recognition. ArXiv.
Zhao, A., Chu, T., Liu, Y., Li, W., Li, J., & Duan, L. 2023. Minimizing maximum model discrepancy for transferable black-box targeted attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8153–62.