Motashim Rasool¹, Uvais Ahmad², Rizwan Akhtar², Shamim Ansari², Saumya Singh²
- ¹Professor, Department of Computer Applications, Integral University, Lucknow, Uttar Pradesh, India
- ²Student, Department of Computer Applications, Integral University, Lucknow, Uttar Pradesh, India
Abstract
Deep learning and reinforcement learning are two pivotal pillars of artificial intelligence and machine learning, with transformative potential in the domain of the Internet of Vehicles (IoV). This abstract explores the multifaceted applications of these techniques within the IoV framework. Deep learning, exemplified by convolutional neural networks (CNNs) and recurrent neural networks (RNNs), equips IoV systems to discern complex patterns in sensory data. This capability finds utility in tasks ranging from object recognition and lane detection for autonomous driving to emotion recognition in drivers. Deep learning also powers driver-assistance systems and enhances user interaction within the vehicle, encompassing voice and gesture control, facial recognition, and natural-language interfaces. In parallel, reinforcement learning has emerged as a potent paradigm for optimizing decision-making in IoV applications. Autonomous vehicles leverage reinforcement learning to navigate intricate traffic scenarios and make real-time driving decisions, mitigating risks and enhancing safety. Reinforcement learning also drives energy-efficient routing for electric vehicles and enables dynamic pricing strategies for ride-sharing services, attuning them to ever-changing demand. As the IoV landscape evolves, deep learning and reinforcement learning stand as foundational cornerstones, propelling innovations that promise to reshape the future of transportation, rendering it safer, more efficient, and more user-centric.
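The abstract names reinforcement learning as the engine behind real-time driving decisions. As a minimal, self-contained sketch of that idea (not the authors' implementation), the Python snippet below trains a tabular Q-learning agent on a hypothetical lane-change decision; the gap-based state space, the reward model, and the hyperparameters ALPHA, GAMMA, and EPSILON are all assumptions made for illustration.

```python
import random

# Toy state space: discretized gap to the lead vehicle (an assumption for
# this sketch, not a model from the paper).
GAPS = ["near", "mid", "far"]
ACTIONS = [0, 1]                       # 0: keep lane, 1: change lane
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def reward(state: str, action: int) -> float:
    # Hypothetical reward model: a lane change pays off only when the gap
    # to the lead vehicle is small; otherwise staying in lane is safer.
    if state == "near":
        return 1.0 if action == 1 else -1.0
    return 0.5 if action == 0 else -0.5

q = {(s, a): 0.0 for s in GAPS for a in ACTIONS}  # Q-table

for episode in range(5000):
    state = random.choice(GAPS)
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    r = random.gauss(reward(state, action), 0.1)  # noisy sensor feedback
    next_state = random.choice(GAPS)              # simplified transition model
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    # Standard Q-learning update rule.
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])

for s in GAPS:
    best = max(ACTIONS, key=lambda a: q[(s, a)])
    print(f"gap={s}: learned action = {'change lane' if best else 'keep lane'}")
```

Under the assumed rewards, the learned policy changes lanes only when the gap is near and otherwise keeps the lane, mirroring in miniature the kind of real-time decision-making the abstract attributes to RL-equipped vehicles.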
Keywords: Internet of Vehicles, deep learning, reinforcement learning, convolutional neural network, recurrent neural network, generative adversarial network, Siamese network
[This article belongs to International Journal of Advanced Robotics and Automation Technology]
Motashim Rasool, Uvais Ahmad, Rizwan Akhtar, Shamim Ansari, Saumya Singh. Empowering Vehicle: The Impact of Deep and Reinforcement Learning in IoV. International Journal of Advanced Robotics and Automation Technology. 2025; 03(02):1-12. Available from: https://journals.stmjournals.com/ijarat/article=2025/view=230272
References
- L. Elmoiz Alatabani, E. Sayed Ali, R. A. Mokhtar, R. A. Saeed, H. Alhumyani, and M. Kamrul Hasan, "[Retracted] Deep and Reinforcement Learning Technologies on Internet of Vehicle (IoV) Applications: Current Issues and Future Trends," Journal of Advanced Transportation, vol. 2022, Article ID 1947886, 16 pages, 2022. https://doi.org/10.1155/2022/1947886
- J. Zhang and K. B. Letaief, "Mobile Edge Intelligence and Computing for the Internet of Vehicles," Proc. IEEE, vol. 108, no. 2, pp. 246–261, 2020.
- J. Feng, Z. Liu, C. Wu, et al., "Mobile Edge Computing for the Internet of Vehicles: Offloading Framework and Job Scheduling," IEEE Veh. Technol. Mag., vol. 14, no. 1, pp. 28–36, 2019.
- Wang, X. Wang, X. Liu, et al., "Task Offloading Strategy Based on Reinforcement Learning Computing in Edge Computing Architecture of Internet of Vehicles," IEEE Access, vol. 8, pp. 173779–173789, 2020.
- L. Liang et al., "Deep-Learning-Based Wireless Resource Allocation with Application to Vehicular Networks," Proc. IEEE, vol. 108, no. 2, pp. 341–356, 2020.
- V. Mnih et al., "Human-Level Control Through Deep Reinforcement Learning," Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015.
- 3GPP, "Study on Evaluation Methodology of New Vehicle-to-Everything (V2X) Use Cases for LTE and NR," June 2019. [Online]. Available: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3209
- H. Tu, L. Zhao, Y. Zhang, G. Zheng, C. Feng, S. Song, and K. Liang, "Deep Reinforcement Learning for Optimization of RAN Slicing Relying on Control- and User-Plane Separation," IEEE Internet of Things Journal, vol. 11, no. 5, pp. 8485–8498, 2024.
- S.-S. Lee and S. Lee, "Resource Allocation for Vehicular Fog Computing Using Reinforcement Learning Combined with Heuristic Information," IEEE Internet of Things Journal, vol. 7, no. 10, 2020.
- Y. Dai, D. Xu, Y. Lu, S. Maharjan, and Y. Zhang, "Deep Reinforcement Learning for Edge Caching and Content Delivery in Internet of Vehicles," in Proceedings of the 2019 IEEE/CIC International Conference on Communications in China (ICCC), Changchun, China, August 2019.
- E.-K. Lee, M. Gerla, G. Pau, U. Lee, and J.-H. Lim, "Internet of Vehicles: From Intelligent Grid to Autonomous Cars and Vehicular Fogs," International Journal of Distributed Sensor Networks, vol. 12, no. 9, 2016. https://doi.org/10.1177/1550147716665500
- M. Chen, Y. Tian, G. Fortino, J. Zhang, and I. Humar, "Cognitive Internet of Vehicles," Computer Communications, vol. 120, 2018. https://doi.org/10.1016/j.comcom.2018.02.006
| Article Details | |
|---|---|
| Volume | 03 |
| Issue | 02 |
| Received | 25/03/2025 |
| Accepted | 24/05/2025 |
| Published | 10/07/2025 |
| Publication Time | 107 Days |