Reinforcement Learning in Real World Application: A Study on Robotics, Autonomous Vehicles and Industrial Automation


Open Access


Published: April 22, 2024 | Volume: 01 | Issue: 03 | Page: 1-15

By

B. Shyam Praveen
Assistant Professor, Department of Computer Science, Karpagam Academy of Higher Education, Coimbatore, Tamil Nadu, India

Abstract

This research paper investigates the practical application of reinforcement learning (RL) in three critical domains: robotics, autonomous vehicles, and industrial automation. The study delves into the implementation of RL algorithms to enhance decision-making, adaptability, and autonomy in these real-world scenarios. Through a comprehensive review of existing literature, methodologies, and case studies, the paper addresses the challenges faced and the successes achieved in deploying RL in each domain. The findings offer valuable insights into the potential of RL to revolutionize robotics, autonomous vehicles, and industrial automation, paving the way for increased efficiency, adaptability, and performance in dynamic and complex environments. The paper concludes by highlighting key challenges, proposing future research directions, and emphasizing the significance of ongoing advancements in reinforcement learning for practical, transformative applications in these critical fields.
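
The abstract refers to RL algorithms that learn decision-making policies through interaction with an environment. As a purely illustrative sketch, not taken from the paper, the Python snippet below shows the core tabular Q-learning update on a hypothetical one-dimensional navigation task; the GridWorld environment, reward values, and hyperparameters (alpha, gamma, epsilon) are assumptions chosen only for demonstration.

# Purely illustrative sketch (not from the paper): tabular Q-learning on a
# hypothetical 1-D "move to the goal" task, showing the kind of trial-and-error
# update rule that underlies the RL methods discussed in the abstract.
import random

class GridWorld:
    """Toy environment: the agent starts at cell 0 and must reach the last cell."""
    def __init__(self, n_states=5):
        self.n_states = n_states
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = move left, 1 = move right
        delta = 1 if action == 1 else -1
        self.state = max(0, min(self.n_states - 1, self.state + delta))
        done = self.state == self.n_states - 1
        reward = 1.0 if done else -0.01  # small step cost, bonus at the goal
        return self.state, reward, done

env = GridWorld()
q = [[0.0, 0.0] for _ in range(env.n_states)]  # Q-table: one row per state, one column per action
alpha, gamma, epsilon = 0.1, 0.95, 0.1         # learning rate, discount factor, exploration rate

for episode in range(500):
    s, done, steps = env.reset(), False, 0
    while not done and steps < 100:
        # Epsilon-greedy action selection: mostly exploit, occasionally explore.
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda x: q[s][x])
        s_next, r, done = env.step(a)
        # Core Q-learning update: nudge Q(s, a) toward the bootstrapped target.
        q[s][a] += alpha * (r + gamma * max(q[s_next]) * (not done) - q[s][a])
        s, steps = s_next, steps + 1

print("Learned greedy policy (0 = left, 1 = right):",
      [max((0, 1), key=lambda x: q[s][x]) for s in range(env.n_states)])

After training, the printed greedy policy should select "right" in every non-terminal cell; the same exploration/update loop, with function approximation in place of the table, is what scales this idea to the robotics, driving, and automation settings the paper surveys.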


Keywords: Reinforcement learning (RL); real-world scenarios; robotics; autonomous vehicles; future research directions

[This article belongs to International Journal of Advanced Robotics and Automation Technology (IJARAT)]

How to cite this article: B. Shyam Praveen. Reinforcement Learning in Real World Application: A Study on Robotics, Autonomous Vehicles and Industrial Automation. IJARAT. April 22, 2024; 01:1-15.

How to cite this URL: B. Shyam Praveen. Reinforcement Learning in Real World Application: A Study on Robotics, Autonomous Vehicles and Industrial Automation. IJARAT. April 22, 2024 [cited April 22, 2024]; 01:1-15. Available from: https://journals.stmjournals.com/ijarat/article=April 22, 2024/view=0



Regular Issue | Subscription | Original Research


Volume 01
Issue 03
Received February 29, 2024
Accepted March 11, 2024
Published April 22, 2024
