This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.
Sangeeta Singh, Assistant Professor, Department of Computer Science Engineering, Madhav University, Rajasthan, India
Abstract
Natural Language Processing (NLP) is a key field within artificial intelligence that focuses on enabling machines to understand, analyze, and interact with human language, a need driven by the rapid growth of digital text data. This study surveys NLP techniques, comparing traditional machine-learning approaches with modern deep-learning models and evaluating how well the different models perform. Using well-known datasets such as IMDB movie reviews and Twitter sentiment data, the study follows a four-stage pipeline: the data is first preprocessed, relevant features are then extracted, a model is trained on those features, and the trained model is finally evaluated. Performance is measured with metrics such as accuracy and precision. The findings indicate that the proposed deep-learning models outperform the earlier ones, mainly because they are more effective at capturing the context and underlying meaning within the text; the traditional models are simple and fast but struggle with complex text. Object-Oriented Programming (OOP) is central to implementing these systems: the code is organized using classes and objects as its fundamental building blocks, which keeps components such as data handling and models easy to manage, makes the code readable and reusable, and allows new functionality to be added cleanly. The study also discusses the challenges of NLP, such as the large amounts of computing power and training data required. Despite these challenges, NLP remains a powerful tool with applications in areas such as healthcare and education.
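The four-stage pipeline described above (preprocessing, feature extraction, training, evaluation) can be sketched as follows. This is a minimal illustration using scikit-learn with a tiny hand-made sentiment dataset; the texts, labels, and model choice are illustrative assumptions, not the study's actual code or the IMDB/Twitter corpora it evaluates.

```python
# Minimal sketch of the preprocess -> features -> train -> evaluate
# pipeline. The toy texts and labels below are illustrative stand-ins
# for the IMDB and Twitter sentiment datasets used in the study.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score

texts = [
    "I loved this film, wonderful acting!",
    "Absolutely brilliant and moving story.",
    "What a great, fun movie.",
    "Terrible plot and awful pacing.",
    "I hated every minute of it.",
    "Boring, dull, and a waste of time.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

# 1. Preprocessing: lowercase and strip punctuation/digits.
def preprocess(text):
    return re.sub(r"[^a-z\s]", "", text.lower())

clean = [preprocess(t) for t in texts]

# 2. Feature extraction: TF-IDF term weighting.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(clean)

# 3. Training: a simple linear classifier as the "traditional" baseline.
model = LogisticRegression().fit(X, labels)

# 4. Evaluation: accuracy and precision (computed on the training data
#    here for brevity; a real evaluation uses a held-out test split).
predictions = model.predict(X)
print("accuracy:", accuracy_score(labels, predictions))
print("precision:", precision_score(labels, predictions))
```

A deep-learning variant keeps the same four stages and swaps the feature-extraction and training steps for learned embeddings and a neural model.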
This study helps us understand how to apply NLP models and how to improve them in the future. NLP continues to evolve and will find many new applications.
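The class-based organization the abstract describes can be sketched as below. The class names and methods here are hypothetical assumptions, not the study's actual design; they show how data handling, feature extraction, and modelling separate into swappable components behind small interfaces.

```python
# Illustrative OOP layout: each pipeline stage is a class with a small
# interface, so any one component (e.g. the feature extractor) can be
# replaced without touching the rest. All names are hypothetical.

class TextDataset:
    """Holds raw texts and labels; owns the cleaning step."""
    def __init__(self, texts, labels):
        self.texts = texts
        self.labels = labels

    def cleaned(self):
        return [t.lower().strip() for t in self.texts]

class BagOfWordsExtractor:
    """Simple count features; a TF-IDF or embedding extractor could
    implement the same fit/transform interface."""
    def fit(self, docs):
        self.vocab = sorted({w for d in docs for w in d.split()})
        return self

    def transform(self, docs):
        return [[d.split().count(w) for w in self.vocab] for d in docs]

class MajorityClassModel:
    """Trivial baseline; a neural model would plug in behind the same
    fit/predict interface."""
    def fit(self, features, labels):
        self.majority = max(set(labels), key=labels.count)
        return self

    def predict(self, features):
        return [self.majority for _ in features]

# Usage: the stages compose cleanly because each exposes a small API.
data = TextDataset(["Good movie", "Bad movie", "Great fun"], [1, 0, 1])
extractor = BagOfWordsExtractor().fit(data.cleaned())
features = extractor.transform(data.cleaned())
model = MajorityClassModel().fit(features, data.labels)
print(model.predict(features))  # -> [1, 1, 1]
```

This separation is what the abstract means by reusability and extensibility: adding, say, a transformer-based model only requires a new class with the same fit/predict methods.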
Keywords: NLP, OOP, ML/DL Algorithms, Transformers, BERT, Text Mining, Artificial Intelligence
Sangeeta Singh. A Comprehensive Study of Natural Language Processing Systems Using Modern Programming Languages: Techniques, Architectures, Experimental Evaluation, and Applications. Recent Trends in Programming languages. 2026; 13(01):-. Available from: https://journals.stmjournals.com/rtpl/article=2026/view=242325
References
1. Bird S, Klein E, Loper E. Natural language processing with Python: analyzing text with the
natural language toolkit. O'Reilly Media, Inc.; 2009 Jun 12.
2. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P,
Sastry G, Askell A, Agarwal S. Language models are few-shot learners. Advances in neural
information processing systems. 2020;33:1877-901.
3. Cortes C, Vapnik V. Support-vector networks. Machine learning. 1995 Sep;20(3):273-97.
4. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional
transformers for language understanding. In: Proceedings of the 2019 conference of the North
American chapter of the association for computational linguistics: human language
technologies, volume 1 (long and short papers). 2019 Jun. p. 4171-4186.
5. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997 Nov
15;9(8):1735-80.
6. Elliott R, Glauert JR, Kennaway JR, Marshall I. The development of language processing
support for the ViSiCAST project. In: Proceedings of the fourth international ACM conference
on Assistive technologies. 2000 Nov 13. p. 101-108.
7. Maas AL, Daly RE, Pham PT, Huang D, Ng AY, Potts C. Learning word vectors for sentiment
analysis. In: Proceedings of the 49th Annual Meeting of the Association for Computational
Linguistics (ACL). 2011. p. 142-150.
8. Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in
vector space. arXiv preprint arXiv:1301.3781. 2013 Jan 16.
9. Pennington J, Socher R, Manning CD. GloVe: Global vectors for word representation.
In: Proceedings of the 2014 conference on empirical methods in natural language processing
(EMNLP). 2014 Oct. p. 1532-1543.
10. Salton G, Buckley C. Term-weighting approaches in automatic text retrieval. Information
processing & management. 1988 Jan 1;24(5):513-23.

Recent Trends in Programming languages

| Volume | 13 |
| Issue | 01 |
| Received | 31/03/2026 |
| Accepted | 21/04/2026 |
| Published | 30/04/2026 |
| Publication Time | 30 Days |