Eyes for Machines: A Computer Vision Approach to Enhance Robotic Arm Dexterity and Autonomy

Year : 2024 | Volume : 02 | Issue : 02 | Page : 28–33
By

Balkrishna Rasiklal Yadav

1. Independent Researcher, Institute of Electrical and Electronics Engineers, New Jersey, United States

Abstract

By combining sophisticated robotics with visual awareness, computer vision–operated robotic arms have revolutionized technology. These devices are having a profound effect on several industries, from manufacturing to healthcare. With their ability to see, comprehend, and interact with their surroundings, computer vision–controlled robotic arms are changing the game. In this study, we develop and implement software and hardware to improve the freedom of movement of a robotic arm so that its motion mirrors human hand gestures. Robotic arms are becoming essential in the modern world, with uses across a wide range of sectors, including military, defense, healthcare, and industrial automation. These devices can replicate human movements and hand gestures, which makes them useful in a variety of scenarios. Robotic systems are among the most innovative automation technologies available today; robots were first employed on manufacturing floors in the 1960s and early 1970s. The precision of the system was evaluated using methods such as sorting, kinematic modeling, and center recognition, which refine the movements of the robotic arm. Further innovation to improve the free movement of robotic arms can extend their applications at the defense and industrial level.
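The abstract refers to kinematic modeling and point-to-point trajectories. As an illustration only (the paper's concrete implementation is not reproduced here, and the link lengths, joint names, and step counts below are assumptions), a minimal sketch of forward kinematics for a two-link planar arm and a simple joint-space point-to-point interpolation could look like this:

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector (x, y) of a planar 2-link arm; angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def point_to_point_trajectory(q_start, q_goal, steps=50):
    """Linear interpolation in joint space between two configurations.

    Returns a list of `steps` joint-angle vectors, including both endpoints.
    """
    return [
        [qs + (qg - qs) * t / (steps - 1) for qs, qg in zip(q_start, q_goal)]
        for t in range(steps)
    ]
```

In a gesture-driven system, the goal configuration `q_goal` would typically be derived from a vision pipeline (e.g., detected hand landmarks mapped to joint angles), with the interpolated waypoints streamed to the arm's servo controllers.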

Keywords: Robotics, defense, kinematic modeling, hand gestures, point-to-point trajectory

[This article belongs to International Journal of Advanced Control and System Engineering]

How to cite this article:
Balkrishna Rasiklal Yadav. Eyes for Machines: A Computer Vision Approach to Enhance Robotic Arm Dexterity and Autonomy. International Journal of Advanced Control and System Engineering. 2024; 02(02):28-33.
How to cite this URL:
Balkrishna Rasiklal Yadav. Eyes for Machines: A Computer Vision Approach to Enhance Robotic Arm Dexterity and Autonomy. International Journal of Advanced Control and System Engineering. 2024; 02(02):28-33. Available from: https://journals.stmjournals.com/ijacse/article=2024/view=183102


References

1. Cabré TP, Cairol MT, Calafell DF, Ribes MT, Roca JP. Project-based learning example: controlling an educational robotic arm with computer vision. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje. 2013; 8 (3): 135–142. doi: 10.1109/RITA.2013.2273114.
2. Gomathy CK, Niteesh G, Sai Krishna K. The gesture controlled robot. Int Res J Eng Technol. 2021; 8 (4): 1721–1725.
3. Dheeban SS, Harish DV, Harivignesh A, Prasanna M, Senthil Kumar N. Gesture controlled robotic arm. Int J Sci Technol. 2016; 4 (3): 101–112.
4. Patil C, Sharma S, Singh S. Design and implementation of gesture controlled robot with a robotic arm. Int Res J Eng Technol. 2019; 6 (9): 1351–1356.
5. Rai N, Rai B, Rai P. Computer vision approach for controlling educational robotic arm based on object properties. In: Proceedings of the 2014 2nd International Conference on Emerging Technology Trends in Electronics, Communication and Networking, Surat, India. December 26–27, 2014. pp. 1–9.
6. Hsu YH, Hsu H-Y, Lin J-S. Control design and implementation of intelligent vehicles with robot arm and computer vision. In: Proceedings of the 2015 International Conference on Advanced Robotics and Intelligent Systems (ARIS), Taipei, Taiwan, May 29–31, 2015. pp. 1–6.
7. Chen X, Huang X, Wang Y, Gao X. Combination of augmented reality-based brain-computer interface and computer vision for high-level control of a robotic arm. IEEE Trans Neural Syst Rehabil Eng. 2020; 28: 3140–3147. doi: 10.1109/TNSRE.2020.3038209.
8. Yu F, Xian W, Chen Y, Liu F, Liao M, Madhavan V, Darrell T. BDD100K: a diverse driving video database with scalable annotation tooling. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, June 13–19, 2020. pp. 2633–2642.
9. Pinto L, Gupta A. Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, May 16–21, 2016. pp. 3406–3413. doi: 10.1109/ICRA.2016.7487517.
10. Luo W, Sun P, Zhong F, Liu W, Zhang T, Wang Y. End-to-end active object tracking via reinforcement learning. Proc Mach Learn Res. 2018; 80: 3286–3295.
11. Iovine J. PIC Microcontroller Project Book. New York: McGraw-Hill; 2000.
12. Levine S, Pastor P, Krizhevsky A, Ibarz J, Quillen D. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int J Robotics Res. 2018; 37 (4–5): 421–436.
13. Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: The KITTI dataset. Int J Robotics Res. 2013; 32 (11): 1231–1237.


Original Research
Received 21/10/2024
Accepted 24/10/2024
Published 29/10/2024

