This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.
S. Saraswat, Professor, PDEA’s College of Engineering, Manjari (BK), Pune, Maharashtra, India
Aditya Gulhane, Student, PDEA’s College of Engineering, Manjari (BK), Pune, Maharashtra, India
Sujal Sahu, Student, PDEA’s College of Engineering, Manjari (BK), Pune, Maharashtra, India
Jyotiraditya More, Student, PDEA’s College of Engineering, Manjari (BK), Pune, Maharashtra, India
Shraddha Wakale, Student, PDEA’s College of Engineering, Manjari (BK), Pune, Maharashtra, India
Abstract
Sign language serves as a vital communication method for individuals who are deaf or have hearing impairments, relying on hand gestures, facial cues, and body language to convey meaning. However, people who do not know sign language often find it difficult to communicate with sign language users. Recent progress in artificial intelligence and computing has enabled systems that automatically recognize sign language, identifying hand gestures in real time and easing communication for individuals with hearing difficulties. Traditionally, gesture recognition required specialized equipment such as sensor gloves or depth cameras, which can be expensive and not widely available. With deep learning and computer vision techniques, gestures can now be recognized directly from ordinary webcam feeds, which is more affordable and accessible. In our project, we created a real-time hand gesture recognition system that uses a webcam to capture hand signs, processes them with a deep learning model, and converts the recognized gestures into text and voice output. The system aims to provide a simple communication tool for people with hearing and speech challenges. This paper explains how the system works, how it was built, and how well it performs. The findings indicate that deep learning can accurately identify hand gestures, thereby enhancing communication for individuals who are deaf or mute.
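As a minimal illustration of the per-frame pipeline the abstract describes (capture a webcam frame, preprocess it, classify it with the CNN, and stabilize the text/voice output), the sketch below shows two supporting steps in plain NumPy. The CNN itself, the webcam loop, and the text-to-speech stage are deliberately left as stand-ins, and all names here (`LABELS`, `preprocess`, `smooth_prediction`) are hypothetical, not taken from the paper's implementation.

```python
from collections import Counter, deque

import numpy as np

# Hypothetical label set; a real system would use the trained model's classes.
LABELS = ["hello", "thanks", "yes", "no", "please"]

def preprocess(frame: np.ndarray, size: int = 64) -> np.ndarray:
    """Crop a centred square region of interest and scale pixels to [0, 1]."""
    h, w = frame.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    roi = frame[top:top + side, left:left + side]
    # Nearest-neighbour resize via index sampling (stand-in for cv2.resize).
    idx = np.linspace(0, side - 1, size).astype(int)
    return roi[np.ix_(idx, idx)].astype(np.float32) / 255.0

def smooth_prediction(history: deque, label: str, window: int = 15) -> str:
    """Majority-vote over the last few frames so a single misclassified
    frame does not make the displayed/spoken word flicker."""
    history.append(label)
    while len(history) > window:
        history.popleft()
    return Counter(history).most_common(1)[0][0]
```

In a real loop, each webcam frame would pass through `preprocess`, into the trained CNN (e.g. a framework's predict call), and the argmax label through `smooth_prediction` before being rendered as text and spoken aloud.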
Keywords: Hand gesture recognition, sign language translation, deep learning, computer vision, CNN
[This article belongs to Journal of Image Processing & Pattern Recognition Progress]
S. Saraswat, Aditya Gulhane, Sujal Sahu, Jyotiraditya More, Shraddha Wakale. Sign Talk for Blind and Deaf: Translating Hand Gestures Into Audible and Textual Communication. Journal of Image Processing & Pattern Recognition Progress. 2025; 12(02):-. Available from: https://journals.stmjournals.com/joipprp/article=2025/view=0

Journal of Image Processing & Pattern Recognition Progress
| Field | Value |
| --- | --- |
| Volume | 12 |
| Issue | 02 |
| Received | 25/03/2025 |
| Accepted | 16/04/2025 |
| Published | 28/04/2025 |
| Publication Time | 34 Days |