An Automation Detection for Sign Language Using AI

Year: 2024 | Volume: 11 | Issue: 01 | Pages: 1–14
By

  1. Om Krishna Hankare

  2. Ravikishor Bandla

  3. Gangadri

  4. Navaneeth Harivikas

  5. Mahesh Sutar

  1–4. Student, Department of Computer Science and Engineering (Artificial Intelligence & Machine Learning), Vishwaniketan’s Institute of Management Entrepreneurship and Engineering Technology, Khalapur, Mumbai University, Mumbai, Maharashtra, India
  5. Professor, Department of Computer Science and Engineering (Artificial Intelligence & Machine Learning), Vishwaniketan’s Institute of Management Entrepreneurship and Engineering Technology, Khalapur, Mumbai University, Mumbai, Maharashtra, India

Abstract

Sign language recognition has attracted considerable interest because of its potential to bridge the communication divide between the deaf community and the general public. Traditional approaches often struggle to interpret the complex, nuanced gestures inherent in sign languages accurately; recent advances in deep learning, however, have shown promising results in improving the accuracy and robustness of recognition systems. This study presents an enhanced sign language recognition system built on state-of-the-art deep learning architectures: convolutional neural networks (CNNs) extract spatial features from sign language images, while recurrent neural networks (RNNs) capture the temporal relationships in sign sequences. The system is trained on large-scale sign language datasets to learn discriminative features and improve generalization. To address variations in lighting conditions, backgrounds, and signer characteristics, data augmentation is employed to enhance the model’s robustness, and transfer learning is explored to leverage models pre-trained on large-scale visual recognition tasks. Experimental results show that the proposed approach achieves state-of-the-art performance on benchmark sign language recognition datasets, surpassing previous methods in both accuracy and generalization. The system’s effectiveness is validated through extensive evaluation on diverse sign language datasets, demonstrating its potential for real-world applications that facilitate communication for the hearing-impaired community.
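
To make the pipeline the abstract describes concrete, below is a minimal Keras sketch of a CNN+RNN sign recognizer: an ImageNet-pretrained CNN backbone supplies the transfer learning and the per-frame spatial features, an LSTM captures the temporal relationships across frames, and random brightness, contrast, and translation layers stand in for the data augmentation. The frame size, sequence length, class count, and choice of MobileNetV2 are illustrative assumptions, not details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative hyperparameters (assumptions, not the authors' configuration).
NUM_CLASSES = 26           # e.g., one class per fingerspelled letter
SEQ_LEN = 16               # frames sampled from each sign clip
FRAME_SHAPE = (96, 96, 3)  # per-frame image size

# Data augmentation for lighting/background/signer variation, applied per
# frame, followed by rescaling raw pixels to the [-1, 1] range that
# MobileNetV2 expects.
preprocess = models.Sequential([
    layers.RandomBrightness(0.2),
    layers.RandomContrast(0.2),
    layers.RandomTranslation(0.1, 0.1),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),
])

# Transfer learning: a CNN pretrained on ImageNet, frozen and used as a
# per-frame spatial feature extractor.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=FRAME_SHAPE, include_top=False, pooling="avg",
    weights="imagenet")
backbone.trainable = False

inputs = layers.Input(shape=(SEQ_LEN, *FRAME_SHAPE))
x = layers.TimeDistributed(preprocess)(inputs)  # augment + rescale each frame
x = layers.TimeDistributed(backbone)(x)         # CNN: spatial features
x = layers.LSTM(128)(x)                         # RNN: temporal relationships
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In a pipeline like this, the frozen backbone is commonly fine-tuned at a lower learning rate once the LSTM head has converged; the abstract does not specify the authors' actual architecture or training schedule.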

Keywords: Sign language recognition, deep learning, convolutional neural networks, recurrent neural networks, data augmentation, transfer learning

[This article belongs to Recent Trends in Programming Languages (rtpl)]

How to cite this article: Om Krishna Hankare, Ravikishor Bandla, Gangadri, Navaneeth Harivikas, Mahesh Sutar. An Automation Detection for Sign Language Using AI. Recent Trends in Programming Languages. 2024; 11(01): 1–14.
How to cite this URL: Om Krishna Hankare, Ravikishor Bandla, Gangadri, Navaneeth Harivikas, Mahesh Sutar. An Automation Detection for Sign Language Using AI. Recent Trends in Programming Languages. 2024 [cited 2024 Apr 4]; 11(01): 1–14. Available from: https://journals.stmjournals.com/rtpl/article=2024/view=138957


Regular Issue | Subscription | Review Article
Volume 11, Issue 01
Received: March 14, 2024
Accepted: March 20, 2024
Published: April 4, 2024