AI-Newss 4.0 – Most Suitable LLM for UPSC Aspirants

Notice

This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.

Year: 2024 | Volume: 02 | Issue: 02 | Page: –
By

Priyansh Joshi,

  1. Student, Shri Gujrati Samaj Ajmera Mukesh Nemichand Bhai School, Indore, Madhya Pradesh, India

Abstract

In recent years, the domain of interactive Artificial Intelligence (AI) has experienced a significant surge, with Large Language Models (LLMs) at the forefront of this evolution. These AI systems, including those based on the GPT-3.5 framework, have been engineered to address various tasks, such as responding to intricate inquiries, participating in conversations, and executing sophisticated Natural Language Processing (NLP) operations. A prominent LLM, known for its adaptability, prompts an essential question: can it serve as a full-time assistant, particularly in demanding contexts such as UPSC examination preparation? This investigation evaluates the capabilities of LLM-based systems as a potential aid for UPSC candidates, scrutinizing their competencies in areas such as subject knowledge, practice examinations, contemporary affairs interpretation, and response composition. The study is further contextualized by the UNESCO Report on AI in Higher Education (2023), which addresses crucial aspects including Pedagogy and Learning, Scientific Inquiry, Academic Honesty, Data Protection Concerns, Gender Equality and Inclusivity, and Regulatory Shortcomings. A range of assessments was conducted to gauge the LLM’s aptitude on questions pertinent to the UPSC curriculum, encompassing topics from General Studies, Elective Subjects, Ethics, and Composition Writing. The evaluation considered factors such as precision, analytical thoroughness, and the capacity to engage in the nuanced discussion essential for UPSC preparation. Our results indicate that while LLMs show significant promise as UPSC study aids, they face several constraints: for instance, they occasionally lack the analytical depth necessary for essay composition and struggle to stay current with real-time events.
The research concludes by emphasizing these limitations and stressing the need for further enhancements if LLMs are to serve effectively as full-time assistants in UPSC exam preparation.
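The assessment procedure outlined above (posing curriculum questions to the model and scoring its precision by subject) can be illustrated with a minimal harness. Everything in the sketch is hypothetical: `ask_model` stands in for a call to a GPT-3.5-style API, and the items are placeholder questions, not the paper’s actual test set.

```python
# Minimal sketch of an MCQ evaluation harness for LLM-based UPSC practice tests.
# `ask_model` is a hypothetical stand-in for a real LLM API call.
from collections import defaultdict

def ask_model(question, options):
    """Placeholder: a real implementation would query the LLM here."""
    return "A"  # fixed response, for illustration only

def evaluate(items):
    """Return per-subject accuracy for a list of multiple-choice items."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        predicted = ask_model(item["question"], item["options"])
        total[item["subject"]] += 1
        if predicted == item["answer"]:
            correct[item["subject"]] += 1
    return {subject: correct[subject] / total[subject] for subject in total}

items = [
    {"subject": "Polity", "question": "Which article ...?",
     "options": ["A", "B", "C", "D"], "answer": "A"},
    {"subject": "Polity", "question": "Which schedule ...?",
     "options": ["A", "B", "C", "D"], "answer": "B"},
]
print(evaluate(items))  # per-subject accuracy
```

A real harness would also log the model’s free-text justification alongside each answer, since the paper’s criteria include analytical thoroughness, not just precision.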

Keywords: LLMs, Conversational AI, summarization, newspapers, Natural Language Processing (NLP), UPSC examination
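The keywords mention summarization of newspapers, a core task in current-affairs preparation. As a point of comparison for LLM output, a classical frequency-based extractive summarizer can be sketched in a few lines of standard-library Python; this is an illustrative baseline, not the system described in the paper.

```python
# Frequency-based extractive summarizer: rank sentences by the average
# corpus frequency of their words, then keep the top n in original order.
import re
from collections import Counter

def summarize(text, n=1):
    """Return the n highest-scoring sentences, preserving original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n])
    return " ".join(s for s in sentences if s in top)

article = ("The exam covers polity and economy. "
           "Polity questions dominate the exam. "
           "Cats are nice.")
print(summarize(article))
```

Extractive baselines like this only copy existing sentences; the abstractive rewriting that LLMs perform is precisely where the accuracy and freshness concerns raised in the abstract arise.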

[This article belongs to International Journal of Computer Science Languages (ijcsl)]

How to cite this article:
Priyansh Joshi. AI-Newss 4.0 – Most Suitable LLM for UPSC Aspirants. International Journal of Computer Science Languages. 2024; 02(02):-.
How to cite this URL:
Priyansh Joshi. AI-Newss 4.0 – Most Suitable LLM for UPSC Aspirants. International Journal of Computer Science Languages. 2024; 02(02):-. Available from: https://journals.stmjournals.com/ijcsl/article=2024/view=0



Regular Issue | Subscription | Review Article
Volume 02
Issue 02
Received 11/10/2024
Accepted 19/10/2024
Published 21/10/2024
