Deep Learning for Autonomous Vehicle Path Optimization in Urban Environments

How to Cite

[1]
Dr. Stephanie Gillam, “Deep Learning for Autonomous Vehicle Path Optimization in Urban Environments”, Journal of Bioinformatics and Artificial Intelligence, vol. 3, no. 2, pp. 55–72, Jun. 2024, Accessed: Oct. 05, 2024. [Online]. Available: https://biotechjournal.org/index.php/jbai/article/view/65

Abstract

Deep learning is a non-parametric approach well suited to prediction problems. It can be applied across domains through architectures such as autoencoders, which learn features without explicit dimensionality reduction, and recurrent networks, which capture the temporal patterns of the outputs. Deep recurrent networks use sequences and feedback loops to model this temporal structure, making them suitable for processing time-series data with long-range dependencies and for representing highly complex input-output mappings. Because of its flexibility and predictive performance, deep learning has primarily been used as a black-box model, allowing complex input-to-output mappings to be built without in-depth knowledge of the data. In practice, however, purely black-box prediction often exploits the data in a faulty fashion, degrading the quality of the generated paths and, in turn, the safety and efficiency of the autonomous vehicle.
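As an illustrative sketch only (not the article's model), the following minimal example shows how a deep recurrent network of the kind described above could map a history of vehicle states to a short sequence of future path waypoints; all module names, tensor shapes, and hyperparameters here are assumptions chosen for demonstration.

import torch
import torch.nn as nn


class RecurrentPathPredictor(nn.Module):
    """Hypothetical sketch: an LSTM encoder over past vehicle states,
    followed by a linear head that regresses a fixed number of future
    (x, y) waypoints. Not the method proposed in the article."""

    def __init__(self, state_dim: int = 4, hidden_dim: int = 64,
                 num_layers: int = 2, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        # The LSTM's feedback loops capture long temporal dependencies
        # in the input sequence, as the abstract describes.
        self.encoder = nn.LSTM(state_dim, hidden_dim,
                               num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, horizon * 2)  # 2 coordinates per waypoint

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, time, state_dim), e.g. past positions and velocities.
        _, (h_n, _) = self.encoder(history)
        # Use the final hidden state of the top LSTM layer as the sequence summary.
        waypoints = self.head(h_n[-1])
        return waypoints.view(-1, self.horizon, 2)  # (batch, horizon, 2)


if __name__ == "__main__":
    model = RecurrentPathPredictor()
    past_states = torch.randn(8, 20, 4)   # 8 trajectories, 20 timesteps, 4 features
    future_xy = model(past_states)
    print(future_xy.shape)                # torch.Size([8, 10, 2])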


