Unmasking Hallucinations in Large Language Models Using Analysis of the LLAMA 2 Model and RAG Intervention

Year : 2025 | Volume : 12 | Issue : 01 | Page : 76-86
    By

  • Atharva Patil
  • Arohi Paigavan
  • Amarti Dhamele
  • Abbas Merchant
  • Aditya Kasar

  1–5. Student, Department of Electronics and Telecommunication Engineering, Shri Vile Parle Kelavani Mandal’s Narsee Monjee Institute of Management Studies, School of Technology, Management & Engineering, Navi Mumbai, Maharashtra, India

Abstract

This study describes the development of “TradeBot”, a chatbot for financial trading, and how Retrieval-Augmented Generation (RAG) is used to address the problem of producing false or unverifiable information, commonly known as hallucination. RAG allows the chatbot to consult an external data source in addition to the knowledge acquired during training, which increases the accuracy of its responses. The chatbot was built on the Llama 2 model, and the NCFM (NSE’s Certification in Financial Markets) book was integrated as the external data source for RAG. We examine how RAG improves the accuracy of Llama 2 in financial trading scenarios by comparing the chatbot’s responses with and without RAG. The results show that adding RAG significantly lowers the frequency of hallucinations and improves response reliability, demonstrating that RAG helps the chatbot deliver more accurate and trustworthy information.
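
As a rough illustration of the pipeline summarized above, the sketch below shows one way a retrieval-augmented chatbot of this kind could be assembled: the source book is split into passages, the passages are embedded for similarity search, and the retrieved passages are placed into the Llama 2 chat prompt before generation. The specific choices here (the ncfm_book.txt file name, the 500-character chunk size, the all-MiniLM-L6-v2 embedder, and the prompt wording) are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of a RAG setup over an external document with Llama 2.
# Library choices, file names, and parameters are illustrative assumptions.
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoTokenizer, AutoModelForCausalLM

# 1. Split the external source (e.g. a plain-text export of the NCFM book)
#    into passages. The file name and 500-character chunk size are placeholders.
with open("ncfm_book.txt", encoding="utf-8") as f:
    text = f.read()
chunks = [text[i:i + 500] for i in range(0, len(text), 500)]

# 2. Embed the passages once so they can be searched at query time.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_embeddings = embedder.encode(chunks, convert_to_tensor=True)

# 3. Load Llama 2 chat (gated on Hugging Face; requires an accepted license).
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def answer(question: str, top_k: int = 3) -> str:
    # Retrieve the passages most similar to the question.
    q_emb = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, chunk_embeddings, top_k=top_k)[0]
    context = "\n\n".join(chunks[h["corpus_id"]] for h in hits)

    # Ground the generation in the retrieved context (Llama 2 chat prompt format).
    prompt = (
        "<s>[INST] <<SYS>>\nAnswer only from the context below; "
        "say you do not know if the answer is not there.\n<</SYS>>\n\n"
        f"Context:\n{context}\n\nQuestion: {question} [/INST]"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

print(answer("What is a derivative contract?"))
```

Comparing the output of answer() against the same question sent to the bare chat model (no retrieved context in the prompt) mirrors the with-RAG versus without-RAG comparison described in the abstract.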

Keywords: LLMs, hallucinations, RAG, Llama 2, generative AI, hallucinations in LLMs, retrieval-augmented generation

[This article belongs to Journal of Artificial Intelligence Research & Advances]

How to cite this article:
Atharva Patil, Arohi Paigavan, Amarti Dhamele, Abbas Merchant, Aditya Kasar. Unmasking Hallucinations in Large Language Models Using Analysis of the LLAMA 2 Model and RAG Intervention. Journal of Artificial Intelligence Research & Advances. 2024; 12(01):76-86.
How to cite this URL:
Atharva Patil, Arohi Paigavan, Amarti Dhamele, Abbas Merchant, Aditya Kasar. Unmasking Hallucinations in Large Language Models Using Analysis of the LLAMA 2 Model and RAG Intervention. Journal of Artificial Intelligence Research & Advances. 2024; 12(01):76-86. Available from: https://journals.stmjournals.com/joaira/article=2024/view=181706



Regular Issue | Subscription | Review Article
Volume 12, Issue 01
Received: 11/07/2024
Accepted: 05/11/2024
Published: 08/11/2024

