This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.
Kinjal Doshi,
Falguni Parsana,
- Research Scholar, Department of Computer Science, Atmiya University, Rajkot, Gujarat, India
- Assistant Professor, Department of Computer Science, Atmiya University, Rajkot, Gujarat, India
Abstract
The requirement for large, manually labeled datasets is one of the main barriers to applying sentiment analysis algorithms in specialized or rapidly evolving domains in the current Natural Language Processing (NLP) landscape. This work investigates a paradigm shift from traditional fully supervised learning to data-efficient methods, specifically Zero-Shot Learning (ZSL) and Few-Shot Learning (FSL). It leverages the advanced capabilities of instruction-tuned Large Language Models (LLMs), such as GPT-4, to assess their ability to classify sentiment across varied linguistic settings with little to no task-specific training data. The study combines benchmark datasets such as SST-2 with specialized domain corpora, including hospital reviews and financial tweets, to evaluate performance stability and domain generalizability.
Our experimental results show that a fully supervised model such as BERT achieves high accuracy (93.2%), while the GPT-4 few-shot model reaches a competitive 92.8% accuracy using just 1% of the labeled training data. This represents a significant optimization of resource allocation, reducing both the human labor required for annotation and the computational overhead of fine-tuning. The study also conducts a comprehensive error analysis, revealing persistent challenges in linguistic complexity, such as recognizing sarcasm and interpreting domain-specific jargon. By addressing context ambiguity and data imbalance through deliberate prompt engineering and In-Context Learning (ICL), this work provides a practical method for high-performance sentiment analysis in resource-constrained settings.
The results show that the marginal utility of additional labeled data drops rapidly beyond the few-shot threshold, making FSL the most practical and cost-effective approach for modern enterprise-level sentiment monitoring.
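To illustrate the in-context learning setup described above, the sketch below assembles a few-shot prompt for binary sentiment classification from a handful of labeled demonstrations. The instruction wording, label set, and example reviews are hypothetical and are not taken from the paper; the actual prompts used in the study may differ.

```python
# Minimal sketch of few-shot (in-context) prompt construction for
# sentiment classification. Demonstrations and labels are illustrative
# only; sending the prompt to an LLM is left out of this sketch.

def build_few_shot_prompt(demonstrations, query):
    """Assemble an ICL prompt: an instruction, labeled examples, then the query."""
    lines = ["Classify the sentiment of each review as Positive or Negative."]
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The query is left unlabeled so the model completes the final line.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("The staff were attentive and the ward was spotless.", "Positive"),
    ("Waited three hours and nobody explained the delay.", "Negative"),
]

prompt = build_few_shot_prompt(demos, "Discharge was quick and well organized.")
print(prompt)
```

In a zero-shot variant, the demonstrations list would simply be empty, leaving only the instruction and the unlabeled query.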
Keywords: Sentiment analysis, zero-shot learning, few-shot learning, transfer learning, NLP, artificial intelligence, Large Language Models.
Kinjal Doshi, Falguni Parsana. Understanding Sentiment Trends Through Zero-Shot and Few-Shot Learning Models. International Journal of Computer Science Languages. 2026; 04(01):-. Available from: https://journals.stmjournals.com/ijcsl/article=2026/view=241116

International Journal of Computer Science Languages
| Volume | 04 |
| Issue | 01 |
| Received | 22/12/2025 |
| Accepted | 09/01/2026 |
| Published | 27/04/2026 |
| Publication Time | 126 Days |