Enhancing the Robustness of Classification Against Adversarial Attacks through a Dual-Enhancement Strategy

Authors: Guo Niu, Shuaiwei Jiao, Nannan Zhu, Juxin Liao, Shengjun Deng, Tao Li, Xiongfei Yao, and Huanlin Mo
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 247-259
Keywords: Object Classification, Adversarial Attacks, Multi-step Attacks, Adversarial Robustness, Robust Training

Abstract

Deep neural networks have achieved remarkable success in object classification, but as accuracy improves, model robustness has become a growing concern. Existing methods such as adversarial training enhance robustness, yet adversarial examples can still induce high-confidence, incorrect predictions. To address this issue, we propose a new defense mechanism, Dynamic MixCut. The method combines the advantages of multi-box CutMix and Mixup, increasing the diversity and complexity of generated samples and enabling a more effective defense against complex adversarial attacks, especially in dynamic perturbation environments. Through theoretical analysis, we identify the fundamental reasons why traditional Mixup loses robustness under multi-step attacks, in particular the way adversarial perturbations are mixed between samples. Dynamic MixCut further improves the model's adaptability to diverse attack strategies by integrating more sophisticated perturbation designs into adversarial example generation, thereby mitigating the trade-off between standard accuracy and adversarial robustness. Experimental results on the CIFAR-10 and SVHN datasets demonstrate that Dynamic MixCut improves adversarial accuracy by over 10% on average compared to the baseline while preserving standard accuracy. This work provides novel insights into robust training for object classification and contributes to the advancement of adversarial training techniques.
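The abstract describes Dynamic MixCut only at a high level. As a rough illustration of how a Mixup and multi-box CutMix combination can be composed in practice, the following PyTorch sketch blends two batches globally and then pastes several random rectangles from the shuffled batch. The function name dynamic_mixcut, the Beta parameter alpha, the box count, and the box-sizing rule are all illustrative assumptions, not the authors' published algorithm.

```python
# A minimal sketch of a Mixup + multi-box CutMix augmentation (hypothetical;
# parameters and box-sizing rule are assumptions, not the paper's method).
import torch


def dynamic_mixcut(x, y, alpha=1.0, num_boxes=2):
    """Blend a batch with a shuffled copy of itself (Mixup), then paste
    `num_boxes` random rectangles from the shuffled copy (multi-box CutMix).

    Returns the augmented batch plus (y, y_perm, lam_eff) for a mixed loss.
    """
    batch_size, _, h, w = x.shape
    perm = torch.randperm(batch_size, device=x.device)
    x_perm = x[perm]
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Mixup: global convex combination of the two batches.
    mixed = lam * x + (1.0 - lam) * x_perm

    # Multi-box CutMix: paste several rectangles taken from the shuffled
    # batch, so each image mixes content at both global and local scale.
    pasted_area = 0
    for _ in range(num_boxes):
        # Assumption: total box area roughly tracks (1 - lam) of the image.
        ratio = ((1.0 - lam) / num_boxes) ** 0.5
        bh, bw = int(h * ratio), int(w * ratio)
        cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        y1, y2 = max(cy - bh // 2, 0), min(cy + bh // 2, h)
        x1, x2 = max(cx - bw // 2, 0), min(cx + bw // 2, w)
        mixed[:, :, y1:y2, x1:x2] = x_perm[:, :, y1:y2, x1:x2]
        pasted_area += (y2 - y1) * (x2 - x1)

    # Outside the boxes the image is lam*x + (1-lam)*x_perm; inside it is
    # pure x_perm, so the effective weight of the original label shrinks
    # with the pasted fraction (overlapping boxes are ignored in this sketch).
    lam_eff = lam * (1.0 - pasted_area / (h * w))
    return mixed, y, y[perm], lam_eff
```

In an adversarial-training loop of this kind, the returned tuple would typically feed a mixed objective such as lam_eff * CE(logits, y) + (1 - lam_eff) * CE(logits, y_perm), with the adversarial perturbation generated on the augmented batch.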