Adversarial Attack Resilience of Autonomous Vehicle Perception Systems

How to Cite

[1] M. Nguyen, “Adversarial Attack Resilience of Autonomous Vehicle Perception Systems,” Journal of Bioinformatics and Artificial Intelligence, vol. 3, no. 1, pp. 53–70, Jun. 2024. Accessed: Nov. 21, 2024. [Online]. Available: https://biotechjournal.org/index.php/jbai/article/view/57

Abstract

With the increasing reliance of autonomous vehicles (AVs) on convolutional neural network (CNN) based perception, there has been growing interest in understanding the vulnerability of CNNs to adversarial perturbations. Although the majority of prior work addresses image classification, the resulting defenses also apply to safety-critical perception tasks such as those in AVs. To empirically estimate the effectiveness of adversarial training in mitigating the vulnerabilities of AV perception modules, we evaluate it alongside specialized and transfer learning techniques.
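
As a concrete illustration of the defense the abstract evaluates, the following is a minimal sketch of projected gradient descent (PGD) adversarial training in PyTorch, in the spirit of Madry et al. (cited in the references below). The function names and hyperparameters (eps, alpha, steps) are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Projected gradient descent (Madry et al., 2017): take `steps`
    # signed-gradient ascent steps on the loss, projecting back into the
    # L-infinity ball of radius `eps` around the clean input after each step.
    # Hyperparameter defaults here are illustrative, not from the paper.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # One step of adversarial training: craft perturbed inputs with PGD,
    # then minimize the loss on those inputs instead of the clean batch.
    model.eval()                      # freeze batch-norm stats while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, the perturbation budget eps would be matched to the threat model assumed for the vehicle's camera pipeline.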

References

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to End Learning for Self-Driving Cars,” arXiv:1604.07316 [cs], Apr. 2016.

A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial Examples in the Physical World,” arXiv:1607.02533 [cs], Jul. 2016.

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards Deep Learning Models Resistant to Adversarial Attacks,” arXiv:1706.06083 [cs, stat], Jun. 2017.

W. Xu, D. Evans, and Y. Qi, “Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks,” arXiv:1704.01155 [cs, stat], Apr. 2017.

S. Tatineni, “Federated Learning for Privacy-Preserving Data Analysis: Applications and Challenges,” International Journal of Computer Engineering and Technology, vol. 9, no. 6, 2018.

V. Vemoori, “Towards Secure and Trustworthy Autonomous Vehicles: Leveraging Distributed Ledger Technology for Secure Communication and Exploring Explainable Artificial Intelligence for Robust Decision-Making and Comprehensive Testing,” Journal of Science & Technology, vol. 1, no. 1, pp. 130–137, Nov. 2020. [Online]. Available: https://thesciencebrigade.com/jst/article/view/224

M. Shaik et al., “Envisioning Secure and Scalable Network Access Control: A Framework for Mitigating Device Heterogeneity and Network Complexity in Large-Scale Internet-of-Things (IoT) Deployments,” Distributed Learning and Broad Applications in Scientific Research, vol. 3, pp. 1–24, Jun. 2017. [Online]. Available: https://dlabi.org/index.php/journal/article/view/1

N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical Black-Box Attacks against Machine Learning,” in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, 2017, pp. 506–519.

N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical Black-Box Attacks against Machine Learning,” arXiv:1602.02697 [cs, stat], Feb. 2016.

N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The Limitations of Deep Learning in Adversarial Settings,” in 2016 IEEE European Symposium on Security and Privacy (EuroS&P), 2016, pp. 372–387.

N. Carlini and D. Wagner, “Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods,” arXiv:1705.07263 [cs, stat], May 2017.

A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial Machine Learning at Scale,” arXiv:1611.01236 [cs], Nov. 2016.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” arXiv:1608.06993 [cs], Aug. 2016.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097–1105.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” arXiv:1512.03385 [cs], Dec. 2015.

A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv:1704.04861 [cs], Apr. 2017.

N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks,” in 2016 IEEE Symposium on Security and Privacy (SP), 2016, pp. 582–597.

G. Hinton, O. Vinyals, and J. Dean, “Distilling the Knowledge in a Neural Network,” arXiv:1503.02531 [cs], Mar. 2015.
