  • Affiliation: School of Computer Science / School of Cyberspace Security (Engineering Research Center of Digital Forensics, Ministry of Education; Department of Public Computer Teaching)
  • Gender:
  • Title: Associate Professor
  • Discipline: Computer Science and Technology
Publications
Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms
  • DOI: 10.32604/jcs.2025.063606
  • Journal: Journal of Cyber Security
  • Keywords: Adversarial training; hybrid defense mechanisms; deep learning robustness; security-sensitive applications; adversarial attack mitigation
  • Abstract: Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems,
    yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase
    defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving
    threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (TRadeoff-inspired
    Adversarial DEfense via Surrogate-loss minimization) loss that progressively scales perturbation strength,
    maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while
    improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and
    adaptive ensemble learning; and a 40% reduction in computational costs compared to PGD (Projected Gradient
    Descent)-based methods. Experimental results demonstrate state-of-the-art performance, achieving 66.50% adversarial
    accuracy on CIFAR-10 (outperforming TRADES by 12%) and 70.50% robustness against FGSM (Fast Gradient Sign
    Method) attacks on GTSRB (German Traffic Sign Recognition Benchmark). Statistical validation (p < 0.05) confirms
    the reliability of these improvements across multiple attack scenarios. The framework's significance lies in its practical
    deployability for security-sensitive applications: in autonomous systems, it prevents adversarial spoofing of traffic signs
    (89.20% clean accuracy on GTSRB); in biometric security, it resists authentication bypass attempts; and in financial
    systems, it maintains fraud detection accuracy under attack. Unlike existing defenses that trade robustness for efficiency,
    our method simultaneously optimizes both through its unique combination of proactive training and reactive runtime
    mechanisms. This work provides a foundational advancement in adversarial defense, offering a scalable solution for
    protecting AI systems in healthcare diagnostics, intelligent transportation, and other critical domains where model
    integrity is paramount. The proposed framework establishes a new paradigm for developing attack-resistant deep
    learning systems without compromising computational practicality.
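To make the attack model concrete: the FGSM attack named in the abstract perturbs an input in the sign direction of the loss gradient, and the abstract's multi-stage training progressively scales the perturbation budget. The sketch below illustrates both ideas on a toy logistic classifier in NumPy; it is not the paper's implementation, and the function names, the closed-form gradient, and the linear epsilon schedule are illustrative assumptions only.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, the toy model's output probability."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Craft an FGSM adversarial example against a logistic classifier.

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input x is (sigmoid(w.x + b) - y) * w.  FGSM takes one step of
    size eps in the sign of that gradient, maximizing the loss increase
    under an L-infinity budget of eps.
    """
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad_x)

def eps_schedule(stage, n_stages, eps_max):
    """Illustrative linear schedule: grow the perturbation budget from
    eps_max/n_stages up to eps_max across training stages, mimicking the
    'progressively scales perturbation strength' idea in the abstract."""
    return eps_max * (stage + 1) / n_stages

if __name__ == "__main__":
    # A correctly classified positive example (y = 1) ...
    x = np.array([0.5, 0.5])
    w = np.array([1.0, 1.0])
    b, y = 0.0, 1.0
    # ... becomes less confidently classified after the FGSM step.
    x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
    print(sigmoid(np.dot(w, x) + b), sigmoid(np.dot(w, x_adv) + b))
```

In a full training loop, each stage would generate `x_adv` with `eps_schedule(stage, ...)` and train on it (or on a TRADES-style surrogate loss between clean and adversarial predictions); the abstract's hybrid runtime defenses are a separate, inference-time component not sketched here.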
  • First author: 文学志
  • Paper type: Journal article
  • Corresponding author: Eric Danso
  • Volume: 7
  • Issue: 1
  • Translated paper:
  • Publication date: 2025-05-08
  • Full text: 文学志_Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms.pdf