Reinforcement Learning Approaches for Autonomous Vehicle Navigation in Dynamic Environments

How to Cite

[1] F. Matsuno, "Reinforcement Learning Approaches for Autonomous Vehicle Navigation in Dynamic Environments," Journal of Bioinformatics and Artificial Intelligence, vol. 2, no. 1, pp. 52–67, Jun. 2024. Accessed: Nov. 09, 2024. [Online]. Available: https://biotechjournal.org/index.php/jbai/article/view/35

Abstract

A traditional autonomous-vehicle stack is divided into perception, prediction, and decision-making modules. The vehicle first perceives its surroundings through sensors such as cameras, LiDAR, and radar, tracks the relevant objects from the sensor data, uses a prediction model to forecast the future positions and behaviors of moving obstacles, and then outputs control decisions that steer the vehicle. In recent years, driven by the rapid progress of deep learning, autonomous vehicle navigation has evolved from this modular architecture toward end-to-end approaches in which learning-based methods play the dominant role [1].
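The contrast between the modular pipeline and end-to-end, learning-based control can be illustrated with a minimal sketch. The example below is an illustrative assumption rather than the paper's implementation: a small PyTorch value network that maps a fused sensor feature vector directly to discrete driving actions, with epsilon-greedy selection in the style of deep Q-learning. The observation size, action count, and layer widths are hypothetical.

```python
# Minimal sketch (illustrative assumption, not the paper's implementation) of an
# end-to-end learning-based navigation policy: raw sensor features in, control out.
import torch
import torch.nn as nn

class EndToEndNavigationPolicy(nn.Module):
    def __init__(self, obs_dim: int = 64, n_actions: int = 5):
        super().__init__()
        # A single learned mapping from observations to actions replaces the
        # hand-engineered perception -> prediction -> decision pipeline.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),  # Q-values for discrete driving controls
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

    def act(self, obs: torch.Tensor, epsilon: float = 0.05) -> int:
        # Epsilon-greedy action selection, as in DQN-style agents.
        if torch.rand(1).item() < epsilon:
            return int(torch.randint(0, self.net[-1].out_features, (1,)).item())
        with torch.no_grad():
            return int(self.forward(obs).argmax(dim=-1).item())

# Hypothetical usage: obs stands in for a fused camera/LiDAR/radar feature vector.
policy = EndToEndNavigationPolicy()
obs = torch.zeros(1, 64)
action = policy.act(obs)
```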


References

Ó. Gil, A. Garrell, and A. Sanfeliu, "Social Robot Navigation Tasks: Combining Machine Learning Techniques and Social Force Model," 2021. ncbi.nlm.nih.gov

L. Kästner, X. Zhao, Z. Shen, and J. Lambrecht, "Obstacle-aware Waypoint Generation for Long-range Guidance of Deep-Reinforcement-Learning-based Navigation Approaches," 2021. [PDF]

Y. Tang, C. Zhao, J. Wang, C. Zhang et al., "Perception and Navigation in Autonomous Systems in the Era of Learning: A Survey," 2020. [PDF]

B. Udugama, "Review of Deep Reinforcement Learning for Autonomous Driving," 2023.

S. Tatineni, "Beyond Accuracy: Understanding Model Performance on SQuAD 2.0 Challenges," International Journal of Advanced Research in Engineering and Technology (IJARET), vol. 10, no. 1, pp. 566–581, 2019.

V. Vemoori, "Comparative Assessment of Technological Advancements in Autonomous Vehicles, Electric Vehicles, and Hybrid Vehicles vis-à-vis Manual Vehicles: A Multi-Criteria Analysis Considering Environmental Sustainability, Economic Feasibility, and Regulatory Frameworks," Journal of Artificial Intelligence Research, vol. 1, no. 1, pp. 66–98, 2021.

M. Shaik, S. Venkataramanan, and A. K. R. Sadhu, "Fortifying the Expanding Internet of Things Landscape: A Zero Trust Network Architecture Approach for Enhanced Security and Mitigating Resource Constraints," Journal of Science & Technology, vol. 1, no. 1, pp. 170–192, 2020.

V. Vemori, "Human-in-the-Loop Moral Decision-Making Frameworks for Situationally Aware Multi-Modal Autonomous Vehicle Networks: An Accessibility-Focused Approach," Journal of Computational Intelligence and Robotics, vol. 2, no. 1, pp. 54–87, 2022.

J. Hossain, "Autonomous Driving with Deep Reinforcement Learning in CARLA Simulation," 2023. [PDF]

R. Trauth, A. Hobmeier, and J. Betz, "A Reinforcement Learning-Boosted Motion Planning Framework: Comprehensive Generalization Performance in Autonomous Driving," 2024. [PDF]

A. Kendall, J. Hawke, D. Janz, P. Mazur et al., "Learning to Drive in a Day," 2018. [PDF]

T. Ganegedara, L. Ott, and F. Ramos, "Learning to Navigate by Growing Deep Networks," 2017. [PDF]

W. Zhang, K. Zhao, P. Li, X. Zhu et al., "A Closed-Loop Perception, Decision-Making and Reasoning Mechanism for Human-Like Navigation," 2022. [PDF]

R. Bin Issa, M. Das, M. Saferi Rahman, M. Barua et al., "Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment," 2021. ncbi.nlm.nih.gov

K. Cheng, X. Long, K. Yang, Y. Yao et al., "GaussianPro: 3D Gaussian Splatting with Progressive Propagation," 2024. [PDF]

D. Paz, N. E. Ranganatha, S. K. Srinivas, Y. Yao et al., "Occlusion-Aware 2D and 3D Centerline Detection for Urban Driving via Automatic Label Generation," 2023. [PDF]

P. Cai, S. Wang, Y. Sun, and M. Liu, "Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion," 2020. [PDF]

Z. Li, S. Yuan, X. Yin, X. Li et al., "Research into Autonomous Vehicles Following and Obstacle Avoidance Based on Deep Reinforcement Learning Method under Map Constraints," 2023. ncbi.nlm.nih.gov

Y. Shi, Y. Liu, Y. Qi, and Q. Han, "A Control Method with Reinforcement Learning for Urban Un-Signalized Intersection in Hybrid Traffic Environment," 2022. ncbi.nlm.nih.gov

T. Liu, X. Mu, X. Tang, B. Huang et al., "Dueling Deep Q Network for Highway Decision Making in Autonomous Vehicles: A Case Study," 2020. [PDF]

F. Carton, D. Filliat, J. Rabarisoa, and Q. Cuong Pham, "Evaluating Robustness over High Level Driving Instruction for Autonomous Driving," 2021. [PDF]

Y. Chen, C. Ji, Y. Cai, T. Yan et al., "Deep Reinforcement Learning in Autonomous Car Path Planning and Control: A Survey," 2024. [PDF]
