Sign Language to Speech Translation and Emergency Alert System for Dumb Persons Using ML and IoT

Open Access

Published: August 14, 2024 | Volume: – | Issue: – | Page: –

By


Abhirami J.S, Dharunya V, Subhadharuna R, Tirukkala Gnana Sindhuja


  1. Assistant Professor, Nehru Institute of Engineering and Technology, Anna University, Coimbatore, India
  2. UG Student, Nehru Institute of Engineering and Technology, Anna University, Coimbatore, India

Abstract

This project proposes a novel approach to gesture recognition using key-point extraction and neural networks. The proposed system applies key-point extraction techniques to capture fine-grained spatial information from input gestures; these key points are then fed into a neural network model, enabling automatic feature learning and robust gesture classification. The project also integrates OpenCV's computer vision capabilities to build a flexible and effective home automation system: cameras capture images that the system processes to identify and react to different environmental cues in the house, while a user-friendly mobile or web application makes it easy to remotely monitor and operate household appliances. By combining OpenCV for image processing with Arduino for real-time control, the system offers a flexible way to improve security, energy efficiency, and general convenience in the home. Its modular design allows future extensions and integration with upcoming technologies, ensuring scalability and continued relevance in the rapidly developing field of smart home automation.
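The abstract's recognition pipeline — key points extracted from a gesture, flattened into a feature vector, and classified by a neural network — can be sketched as follows. This is an illustrative outline only, not the paper's implementation: it assumes 21 (x, y) hand landmarks (as produced by common hand-tracking libraries), and the `extract_keypoints` and `GestureClassifier` names, layer sizes, and untrained random weights are all hypothetical.

```python
import numpy as np

def extract_keypoints(landmarks):
    """Flatten a list of (x, y) hand landmarks into a feature vector,
    centred on the first landmark and scaled for size invariance."""
    pts = np.asarray(landmarks, dtype=float)
    pts = pts - pts[0]                       # translation invariance
    scale = float(np.abs(pts).max()) or 1.0  # avoid division by zero
    return (pts / scale).ravel()

class GestureClassifier:
    """A single-hidden-layer feed-forward network mapping a key-point
    vector to gesture-class probabilities (illustrative, untrained)."""

    def __init__(self, n_inputs, n_hidden, n_gestures, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_gestures))
        self.b2 = np.zeros(n_gestures)

    def predict_proba(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        logits = h @ self.W2 + self.b2
        e = np.exp(logits - logits.max())           # stable softmax
        return e / e.sum()

# With 21 landmarks the input vector has 42 features; a trained model
# would map the highest-probability class to a spoken phrase or alert.
landmarks = [(i * 0.01, i * 0.02) for i in range(21)]
features = extract_keypoints(landmarks)
model = GestureClassifier(n_inputs=42, n_hidden=16, n_gestures=5)
probabilities = model.predict_proba(features)
```

In a full system the landmarks would come from per-frame hand tracking on the camera feed, and the weights would be learned from labelled sign-language examples rather than drawn at random.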


Keywords: Gesture recognition, virtual reality control system, robust gesture recognition, sign language recognition, CNN

[This article belongs to Journal of Microcontroller Engineering and Applications (jomea)]

How to cite this article: Abhirami J.S, Dharunya V, Subhadharuna R, Tirukkala Gnana Sindhuja. Sign Language to Speech Translation and Emergency Alert System for Dumb Persons Using ML and IoT. Journal of Microcontroller Engineering and Applications. August 14, 2024; ():-.


How to cite this URL: Abhirami J.S, Dharunya V, Subhadharuna R, Tirukkala Gnana Sindhuja. Sign Language to Speech Translation and Emergency Alert System for Dumb Persons Using ML and IoT. Journal of Microcontroller Engineering and Applications. August 14, 2024; ():-. Available from: https://journals.stmjournals.com/jomea/article=August 14, 2024/view=0


References

  1. Al-Jarrah O, Halawani A. Recognition of gestures in Arabic sign language using neuro-fuzzy systems. Artificial Intelligence. 2021;133(1–2):117–138.
  2. Alom MZ, Taha TM, Yakopcic C, Westberg S, Sidike P, Nasrin MS, Van Esesn BC, Awwal AAS, Asari VK. The history began from AlexNet: a comprehensive survey on deep learning approaches. arXiv preprint arXiv:1803.01164. 2018.
  3. Al-Qizwini M, Barjasteh I, Al-Qassab H, Radha H. Deep learning algorithm for autonomous driving using GoogLeNet. In: 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE; 2017. p. 89–96.
  4. Aujeszky T, Eid M. A gesture recognition architecture for Arabic sign language communication system. Multimedia Tools and Applications. 2016;75(14):8493–8511.
  5. Ayshee TF, Raka SA, Hasib QR, Hossain M, Rahman RM. Fuzzy rule-based hand gesture recognition for Bengali characters. In: 2014 IEEE International Advance Computing Conference (IACC). IEEE; 2014. p. 484–489.
  6. Doe J, Thorson E, Smith J. Employing Bayesian inference models to bolster the robustness of graph neural networks.
  7. Ibrahim NB, Zayed HH, Selim MM. Advances, challenges and opportunities in continuous sign language recognition. Journal of Engineering and Applied Sciences. 2020;15(5):1205–1227.
  8. Karmel A, Sharma A, Garg D. IoT based assistive device for deaf, dumb and blind people. Procedia Computer Science. 2019;165:259–269.
  9. Triwijoyo BK, Karnaen LY, Adil A. Deep learning approach for sign language recognition. JITEKI: Jurnal Ilmiah Teknik Elektro Komputer dan Informatika. 2023;9(1).
  10. Cubo J, Nieto A, Pimentel E. A cloud-based Internet of Things platform for ambient assisted living. Sensors. 2014;14(8):14070–14105.

Ahead of Print | Subscription | Review Article

Volume: – | Issue: –
Received: May 7, 2024
Accepted: May 18, 2024
Published: August 14, 2024
