DOI Number:10.32604/jcs.2025.063606
Title of Paper:Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms
Journal:Journal of Cyber Security
Key Words:Adversarial training; hybrid defense mechanisms; deep learning robustness; security-sensitive
applications; adversarial attacks mitigation
Abstract:Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems, yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) loss that progressively scales perturbation strength, maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and adaptive ensemble learning; and a 40% reduction in computational costs compared to PGD (Projected Gradient Descent)-based methods. Experimental results demonstrate state-of-the-art performance, achieving 66.50% adversarial accuracy on CIFAR-10 (outperforming TRADES by 12%) and 70.50% robustness against FGSM (Fast Gradient Sign Method) attacks on GTSRB (German Traffic Sign Recognition Benchmark). Statistical validation (p < 0.05) confirms the reliability of these improvements across multiple attack scenarios. The framework's significance lies in its practical deployability for security-sensitive applications: in autonomous systems, it prevents adversarial spoofing of traffic signs (89.20% clean accuracy on GTSRB); in biometric security, it resists authentication bypass attempts; and in financial systems, it maintains fraud detection accuracy under attack. Unlike existing defenses that trade robustness for efficiency, our method simultaneously optimizes both through its unique combination of proactive training and reactive runtime mechanisms. This work provides a foundational advancement in adversarial defense, offering a scalable solution for protecting AI systems in healthcare diagnostics, intelligent transportation, and other critical domains where model integrity is paramount. The proposed framework establishes a new paradigm for developing attack-resistant deep learning systems without compromising computational practicality.
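The FGSM attack referenced in the abstract perturbs an input by a fixed step in the sign direction of the loss gradient with respect to that input. A minimal illustrative sketch, using a toy logistic classifier with a hand-derived gradient (the model, weights, and epsilon here are hypothetical examples, not the paper's experimental setup):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_and_grad(x, w, b, y):
    # Binary cross-entropy loss and its gradient with respect to the INPUT x
    # (not the weights), since FGSM perturbs the input.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]  # dL/dx_i = (p - y) * w_i
    loss = -y * math.log(p) - (1 - y) * math.log(1 - p)
    return loss, grad

def fgsm(x, grad, eps):
    # FGSM: x_adv = x + eps * sign(grad_x L)
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy example: a fixed classifier and a correctly classified input.
w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.5], 1
loss_clean, grad = loss_and_grad(x, w, b, y)
x_adv = fgsm(x, grad, eps=0.1)
loss_adv, _ = loss_and_grad(x_adv, w, b, y)
print(loss_adv > loss_clean)  # the perturbation increases the loss
```

Adversarial training, in its basic form, simply folds such perturbed inputs back into the training loss; TRADES additionally penalizes the divergence between the model's outputs on clean and perturbed inputs.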
First Author:Wen Xuezhi (文学志)
Indexed by:Journal paper
Correspondence Author:Eric Danso
Volume:7
Issue:1
Translation or Not:no
Date of Publication:2025-05-08
