Empowering Communication: A Review of Sign Language Translation Systems Powered by Machine Learning

Year: 2024 | Volume: 11 | Issue: 01 | Pages: 25-31
By

Rohan Kubde

Ayusha Malwe

Sharvari Mate

Simran Pardeshi

Sushma Nikumbh

  1. Lecturer, Department of Electronics and Telecommunication, Sinhgad Institute of Technology and Science (SITS), Narhe, Pune, Maharashtra, India
  2. Student, Department of Electronics and Telecommunication, Sinhgad Institute of Technology and Science (SITS), Narhe, Pune, Maharashtra, India
  3. Student, Department of Electronics and Telecommunication, Sinhgad Institute of Technology and Science (SITS), Narhe, Pune, Maharashtra, India
  4. Student, Department of Electronics and Telecommunication, Sinhgad Institute of Technology and Science (SITS), Narhe, Pune, Maharashtra, India
  5. Student, Department of Electronics and Telecommunication, Sinhgad Institute of Technology and Science (SITS), Narhe, Pune, Maharashtra, India

Abstract

This research study offers a fresh solution to the communication gap between the hearing population and the deaf and hard-of-hearing community: a machine learning-based sign language translator. Using the K-Nearest Neighbour (K-NN) algorithm, the system converts sign language gestures into text and vice versa, enabling smooth communication between sign language users and non-signers. The project rests on thorough data collection and careful preprocessing, which yield a robust and varied dataset able to accommodate natural variations in signing. The study's findings show that the system translates sign language gestures accurately, outperforming earlier baselines, and its ability to translate in real time further increases accessibility and inclusivity for people with hearing loss.

For millions of people worldwide, sign language is an essential means of communication, yet a divide persists between signers and non-signers. Machine learning (ML) has shown promise in closing this gap. This article provides an extensive overview of recent developments in ML-powered sign language translation systems, exploring the principal machine learning approaches, the obstacles they face, and potential paths forward, emphasising how such systems can improve communication for the community of people who are deaf or hard of hearing.
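To make the classification step concrete, the following is a minimal sketch of a K-NN gesture classifier of the kind described above, built with scikit-learn. The feature representation (flattened hand-landmark coordinates), the dataset shape, and the label set are illustrative assumptions; the article does not prescribe a specific implementation.

```python
# Minimal sketch of a K-NN sign-gesture classifier (illustrative only).
# Assumption: each gesture sample is represented by 21 hand landmarks with
# (x, y) coordinates flattened into 42 features, labelled with one of 26
# fingerspelling signs. Real data would replace the random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((500, 42))           # placeholder landmark features
y = rng.integers(0, 26, size=500)   # placeholder labels: signs A-Z

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Scale features so Euclidean distance is not dominated by any coordinate.
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(scaler.transform(X_train), y_train)

accuracy = knn.score(scaler.transform(X_test), y_test)
print(f"Held-out accuracy: {accuracy:.2f}")
```

In a real-time setting, the placeholder features would instead be extracted frame by frame from a camera feed (for example, with a hand-landmark detector such as MediaPipe Hands) and passed to the same trained classifier.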

Keywords: Communication gap, K-Nearest Neighbour (K-NN), Data collection, Sign languages and gestures, Translation

[This article belongs to Recent Trends in Electronics Communication Systems (rtecs)]

How to cite this article: Rohan Kubde, Ayusha Malwe, Sharvari Mate, Simran Pardeshi, Sushma Nikumbh. Empowering Communication: A Review of Sign Language Translation Systems Powered by Machine Learning. Recent Trends in Electronics Communication Systems. 2024; 11(01):25-31.
How to cite this URL: Rohan Kubde, Ayusha Malwe, Sharvari Mate, Simran Pardeshi, Sushma Nikumbh. Empowering Communication: A Review of Sign Language Translation Systems Powered by Machine Learning. Recent Trends in Electronics Communication Systems. 2024; 11(01):25-31. Available from: https://journals.stmjournals.com/rtecs/article=2024/view=150587


Regular Issue | Subscription | Review Article
Volume 11 | Issue 01
Received May 3, 2024 | Accepted May 29, 2024 | Published June 5, 2024