Intrinsic Biologically Plausible Adversarial Training

Artificial Neural Networks (ANNs) trained with Backpropagation (BP) excel at many everyday tasks but have a dangerous vulnerability: inputs with small targeted perturbations, known as adversarial samples, can drastically disrupt their performance. Adversarial training, a technique in which the training dataset is augmented with exemplary adversarial samples, has been shown to mitigate this problem but comes at a high computational cost. In contrast to ANNs, humans are not fooled by these same adversarial samples, so one can postulate that ANNs trained with biologically plausible learning algorithms might be more robust against adversarial attacks. Choosing as a case study the biologically plausible learning algorithm Present the Error to Perturb the Input To modulate Activity (PEPITA), we investigate this question through a comparative analysis with BP-trained ANNs on various computer vision tasks. We observe that PEPITA has a higher intrinsic adversarial robustness and, when adversarially trained, exhibits a more favorable natural-vs-adversarial performance trade-off: for the same natural accuracies, PEPITA's adversarial accuracy decreases on average by only 0.26%, whereas BP's decreases by 8.05%.
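Since the abstract names both mechanisms without detail, the following minimal NumPy sketch may help fix ideas. It assumes a two-layer ReLU network; the layer sizes, learning rate, and initialization are illustrative assumptions, and the update rule follows the published description of PEPITA (a standard forward pass, then a second forward pass on the input perturbed by a fixed random projection F of the output error), not necessarily this paper's exact training setup. The FGSM-style perturbation at the end stands in for adversarial-sample generation in general adversarial training; the attack used in the paper may differ.

```python
# Illustrative sketch only: a PEPITA-style training step and an
# FGSM-style adversarial sample. Shapes and hyperparameters are
# hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 784, 256, 10   # assumed layer sizes (MNIST-like)
lr = 0.01                           # assumed learning rate

W1 = rng.normal(0, np.sqrt(2 / n_in), (n_hid, n_in))
W2 = rng.normal(0, np.sqrt(2 / n_hid), (n_out, n_hid))
# Fixed random matrix that projects the output error back to input space.
F = rng.normal(0, np.sqrt(1 / n_out), (n_in, n_out))

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pepita_step(x, y_onehot, W1, W2, F, lr=lr):
    # 1) Standard forward pass.
    h = relu(W1 @ x)
    y_hat = softmax(W2 @ h)
    e = y_hat - y_onehot            # output error

    # 2) Modulated forward pass: the error, projected through F,
    #    perturbs the *input* -- no backward pass through the network.
    x_mod = x + F @ e
    h_mod = relu(W1 @ x_mod)

    # 3) Local updates from the difference between the two passes;
    #    the last layer uses the error directly.
    W1 -= lr * np.outer(h - h_mod, x_mod)
    W2 -= lr * np.outer(e, h_mod)
    return y_hat

def fgsm_example(x, y_onehot, W1, W2, eps=0.1):
    # FGSM-style adversarial sample for this small network: the
    # cross-entropy gradient w.r.t. the input, computed by hand.
    z1 = W1 @ x
    h = relu(z1)
    e = softmax(W2 @ h) - y_onehot
    grad_x = W1.T @ ((W2.T @ e) * (z1 > 0))
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy usage: one PEPITA update, then an adversarial sample that could
# be mixed into subsequent training (the essence of adversarial training).
x = rng.random(n_in)
y = np.zeros(n_out); y[3] = 1.0
pepita_step(x, y, W1, W2, F)
x_adv = fgsm_example(x, y, W1, W2)
```

The key contrast with BP is visible in `pepita_step`: the error never propagates backward through the weights, it only re-enters at the input, so each layer's update depends on locally available activity from the two forward passes.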
