Title: TVL-Filter: Total Variation Loss–Based Sample Filter for Efficient Adversarial Detection

Conference: PRICAI 2025

Tags: Adversarial Attack, Adversarial Samples, Deep Learning, DNN, Robustness, Total Variation Loss

Abstract: DNN models in computer vision are vulnerable to adversarial samples crafted with imperceptible perturbations, which can lead to unpredictable security risks. Many countermeasures have been proposed in the literature to detect adversarial samples and mitigate their impact; however, these detection algorithms introduce significant computational overhead, which limits their practicality. Two insights motivate this study: 1) for deployed DNN models, the majority of inputs are benign samples that need not undergo detection; 2) the crafted perturbations of adversarial samples can be regarded as a type of high-frequency noise. To this end, we propose the Total Variation Loss–Based Sample Filter (TVL-Filter), a plug-in module for efficient adversarial detection that uses the TV-loss value to measure a sample's high-frequency noise content and accordingly filters out a large portion of benign samples before detection. TVL-Filter substantially reduces adversarial detection overhead at an acceptable cost in detection precision. Our experiments indicate that with the TVL-Filter, three state-of-the-art detection algorithms achieve speedups of up to 8.73x, 8.32x, and 7.06x, with adversarial sample detection accuracy losses of only 2%, 2.90%, and 1.13%, respectively.
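To make the filtering idea concrete, below is a minimal sketch of TV-loss-based pre-filtering. It assumes the common anisotropic total variation formulation (sum of absolute differences between adjacent pixels) and a hypothetical threshold `tau`; the abstract does not specify the paper's exact TV-loss definition or calibration procedure, and the names `tv_loss` and `should_detect` are illustrative, not the authors' API.

```python
import numpy as np

def tv_loss(x: np.ndarray) -> float:
    """Anisotropic total variation of an image.

    Sums absolute differences between vertically and horizontally
    adjacent pixels; works for (H, W) or (H, W, C) float arrays.
    """
    dv = np.abs(np.diff(x, axis=0)).sum()  # vertical neighbor differences
    dh = np.abs(np.diff(x, axis=1)).sum()  # horizontal neighbor differences
    return float(dv + dh)

def should_detect(x: np.ndarray, tau: float) -> bool:
    """Forward the sample to the (expensive) adversarial detector only
    when its TV loss exceeds `tau`; otherwise treat it as benign and
    skip detection.

    `tau` is a hypothetical threshold, e.g. calibrated as a high
    percentile of TV-loss values over held-out benign images.
    """
    return tv_loss(x) > tau

# Toy usage: additive noise stands in for an adversarial perturbation.
rng = np.random.default_rng(0)
clean = rng.random((32, 32, 3))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
print(tv_loss(clean), tv_loss(noisy))  # the noisy image scores higher
```

The design intuition follows the paper's second insight: high-frequency perturbations raise an image's total variation, so adversarially perturbed inputs tend to score above the threshold and proceed to full detection, while most benign inputs fall below it and bypass the detector entirely.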
