Saharsh Harshad Wakhale
- Student, Department of Computer Science, Narayana E-Techno School, Uttarahalli, Bengaluru, Karnataka, India
Abstract
The advancement of large language models (LLMs) has driven a major transformation in artificial intelligence, with substantial effects on natural language processing, machine learning, and human-computer interaction. This research examines the principal improvements in LLMs, tracing milestones from early models such as GPT-2 to contemporary state-of-the-art architectures. It highlights the training methodologies, such as unsupervised pre-training and transfer learning, that have markedly improved the language understanding and generation capabilities of LLMs. Despite these capabilities, LLMs have considerable drawbacks. Biases inherent in training data can perpetuate harmful stereotypes, raising essential questions about accountability and equity in AI systems. The resource-intensive nature of training and deploying LLMs also raises sustainability concerns, as their energy demands carry a significant environmental cost. These issues call for a nuanced understanding of the trade-offs inherent in LLM development. The article advocates responsible AI practices, emphasizing bias-mitigation measures and transparency in model development. Proposed future research directions aim to examine the ethical implications further and improve the robustness of LLMs, seeking a balance between technical progress and public welfare. By promoting interdisciplinary dialogue, we can better navigate the complexity of LLM evolution and harness its potential while mitigating its inherent risks.
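One concrete form the bias-mitigation measures mentioned above can take is counterfactual data augmentation: for each training sentence, a copy with demographic terms swapped is added so the model sees both variants equally often. The sketch below is a minimal, hypothetical illustration of this idea for a small gendered-term list; the term list and function names are assumptions, not part of the article.

```python
import re

# Minimal sketch of counterfactual data augmentation, one common
# bias-mitigation measure: pair each sentence with a copy in which
# gendered terms are swapped, balancing the training corpus.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}
PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap_gendered_terms(sentence: str) -> str:
    """Return the sentence with each listed term replaced by its pair,
    preserving leading capitalization."""
    def repl(match):
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, sentence)

def augment(corpus):
    """Keep every original sentence and add its counterfactual twin."""
    out = []
    for sentence in corpus:
        out.append(sentence)
        out.append(swap_gendered_terms(sentence))
    return out
```

In practice the swap list must be curated carefully (e.g. "her" maps to both "his" and "him"), which is why production systems use richer dictionaries and grammatical analysis rather than a flat lookup like this one.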
Keywords: Large language models, natural language processing, machine learning, artificial intelligence, ChatGPT
[This article belongs to Journal of Web Engineering & Technology]
Saharsh Harshad Wakhale. LLM Evolution: Secrets and Disadvantages. Journal of Web Engineering & Technology. 2025; 12(01):25-36. Available from: https://journals.stmjournals.com/jowet/article=2025/view=200997

Journal of Web Engineering & Technology

| Field | Value |
|---|---|
| Volume | 12 |
| Issue | 01 |
| Received | 27/12/2024 |
| Accepted | 07/01/2025 |
| Published | 18/01/2025 |
| Publication time | 22 days |