An Automation Detection for Sign Language Using AI

Open Access

Published: April 4, 2024 | Volume: 11 | Issue: 01

By

Om Krishna Hankare, Ravikishor Bandla, Gangadri, Navaneeth Harivikas (Students), Mahesh Sutar (Professor)

Department of CSE (AI & ML), Vishwaniketan’s IMEET, Khalapur, Mumbai University, Mumbai, Maharashtra, India

Abstract

Sign language recognition has attracted considerable interest because of its ability to facilitate communication between the deaf community and the public, thereby bridging communication divides. Traditional approaches to sign language recognition often face challenges in accurately interpreting the complex and nuanced gestures inherent in sign languages. However, recent advancements in deep learning techniques have shown promising results in improving the accuracy and robustness of sign language recognition systems. This paper presents an enhanced sign language recognition system utilizing state-of-the-art deep learning architectures. Convolutional neural networks (CNNs) are utilized to extract spatial characteristics from images of sign language, while recurrent neural networks (RNNs) capture the temporal relationships present in sequences of sign language. The proposed system is trained on large-scale sign language datasets to learn discriminative features and improve generalization performance. Furthermore, to address the challenges posed by variations in lighting conditions, backgrounds, and signer characteristics, data augmentation techniques are employed to enhance the robustness of the model. Additionally, transfer learning is explored to leverage pre-trained models on large-scale visual recognition tasks for improved performance on sign language recognition. Experimental results demonstrate that the proposed approach achieves state-of-the-art performance on benchmark sign language recognition datasets, surpassing previous methods in terms of accuracy and generalization. The system’s effectiveness is validated through extensive evaluation on diverse sign language datasets, showcasing its potential for real-world applications in facilitating communication for the hearing-impaired community.
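The abstract describes a hybrid pipeline: a CNN extracts spatial features from each frame, and an RNN models the temporal sequence. A minimal PyTorch sketch of that architecture follows; all layer sizes, the class name, and the choice of a GRU are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    """Illustrative CNN-RNN hybrid: per-frame spatial features from a
    small CNN backbone, then a GRU over the frame sequence."""
    def __init__(self, num_classes=26, feat_dim=64, hidden=128):
        super().__init__()
        # Small CNN backbone (sizes are arbitrary placeholders).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # GRU captures temporal dependencies across the clip.
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        B, T = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))      # (B*T, feat_dim)
        seq = feats.view(B, T, -1)                 # (B, T, feat_dim)
        _, h = self.rnn(seq)                       # h: (1, B, hidden)
        return self.head(h[-1])                    # (B, num_classes)

model = SignRecognizer()
logits = model(torch.randn(2, 8, 3, 64, 64))       # 2 clips of 8 frames
print(logits.shape)                                # torch.Size([2, 26])
```

For transfer learning as mentioned in the abstract, the `self.cnn` backbone would typically be swapped for a model pre-trained on a large visual recognition task, with early layers frozen and only the later layers and the GRU fine-tuned on sign language data.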


Keywords: Sign language recognition, deep learning, convolutional neural networks, recurrent neural networks, data augmentation, transfer learning.
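The data augmentation the abstract mentions (robustness to lighting, backgrounds, and signer characteristics) can be sketched as simple tensor-level jitter. The function name and jitter ranges below are illustrative assumptions, not the authors' pipeline; note that horizontal flips are deliberately avoided, since mirroring a sign can change its handedness and hence its meaning.

```python
import torch

def augment_clip(frames):
    """Illustrative clip-level augmentation: brightness/contrast jitter
    plus a small random spatial shift. `frames` is a (T, 3, H, W)
    tensor with values in [0, 1]."""
    brightness = float(0.8 + 0.4 * torch.rand(1))     # factor in [0.8, 1.2)
    contrast = float(0.8 + 0.4 * torch.rand(1))
    mean = frames.mean(dim=(-2, -1), keepdim=True)    # per-frame, per-channel
    out = ((frames - mean) * contrast + mean) * brightness
    # Small shift simulates camera framing / background variation.
    dx = int(torch.randint(-4, 5, (1,)))
    dy = int(torch.randint(-4, 5, (1,)))
    out = torch.roll(out, shifts=(dy, dx), dims=(-2, -1))
    return out.clamp(0.0, 1.0)

clip = torch.rand(8, 3, 64, 64)   # one 8-frame clip
aug = augment_clip(clip)
print(aug.shape)                  # torch.Size([8, 3, 64, 64])
```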

This article belongs to Recent Trends in Programming Languages (RTPL).

How to cite this article: Om Krishna Hankare, Ravikishor Bandla, Gangadri, Navaneeth Harivikas, Mahesh Sutar. An Automation Detection for Sign Language Using AI. RTPL. April 4, 2024; 11(01):–.

How to cite this URL: Om Krishna Hankare, Ravikishor Bandla, Gangadri, Navaneeth Harivikas, Mahesh Sutar. An Automation Detection for Sign Language Using AI. RTPL. April 4, 2024 [cited April 4, 2024]; 11(01):–. Available from: https://journals.stmjournals.com/rtpl/article=April 4, 2024/view=0




Regular Issue | Open Access | Review Article

Recent Trends in Programming Languages

ISSN: 2455-1821


Volume 11
Issue 01
Received March 14, 2024
Accepted March 20, 2024
Published April 4, 2024
