Unmasking Hallucinations in Large Language Models Using an Analysis of the Llama 2 Model and RAG Intervention


Notice

This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.

Year: 2025 | Volume: 12 | Issue: 01 | Page: –
    By

  • Atharva Patil
  • Arohi Paigavan
  • Amarti Dhamele
  • Abbas Merchant
  • Aditya Kasar

  1. Student, Department of Electronics and Telecommunication Engineering, SVKM’s NMIMS, STME Navi Mumbai, Maharashtra, India
  2. Student, Department of Electronics and Telecommunication Engineering, SVKM’s NMIMS, STME Navi Mumbai, Maharashtra, India
  3. Student, Department of Electronics and Telecommunication Engineering, SVKM’s NMIMS, STME Navi Mumbai, Maharashtra, India
  4. Student, Department of Electronics and Telecommunication Engineering, SVKM’s NMIMS, STME Navi Mumbai, Maharashtra, India
  5. Student, Department of Electronics and Telecommunication Engineering, SVKM’s NMIMS, STME Navi Mumbai, Maharashtra, India

Abstract

This paper describes the development of “TradeBot”, a chatbot for financial trading, and how it uses Retrieval-Augmented Generation (RAG) to address the problem of producing false or unverifiable information, commonly known as hallucination. RAG allows the chatbot to consult an external data source in addition to the knowledge acquired during training, which increases the accuracy of its responses. The chatbot was built on the Llama 2 model, and the NCFM (NSE’s Certification in Financial Markets) book was integrated as the external data source for the RAG implementation. We examine how RAG improves the accuracy of Llama 2 in financial trading scenarios by comparing the chatbot’s responses with and without RAG. The results show that adding RAG significantly lowers the frequency of hallucinations and improves response reliability, demonstrating that RAG helps the chatbot avoid hallucinations and deliver more accurate and trustworthy information.
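
The abstract describes the RAG pattern only at a high level. As a purely illustrative aid, the sketch below shows the general flow that pattern implies: split an external document into chunks, retrieve the chunks most relevant to a user's question, and prepend them to the prompt given to the language model so that its answer is grounded in retrieved text rather than in its parametric knowledge alone. This is a minimal sketch under stated assumptions, not the authors' TradeBot implementation: the TF-IDF retriever, the fixed-size character chunking, the short sample text standing in for the NCFM book, and the generate_answer placeholder (which would be wired to a Llama 2 chat model) are all simplifications introduced here for brevity.

    # Minimal, illustrative RAG flow: retrieve relevant chunks from an external
    # source and ground the model's prompt in them. Not the authors' pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity


    def chunk_text(text: str, chunk_size: int = 120) -> list[str]:
        # Fixed-size character chunks; real systems usually split on sentences
        # or sections instead.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


    def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
        # Rank chunks by TF-IDF cosine similarity to the query (a stand-in for
        # the embedding-based retrievers typically used with RAG).
        vectorizer = TfidfVectorizer()
        chunk_vectors = vectorizer.fit_transform(chunks)
        query_vector = vectorizer.transform([query])
        scores = cosine_similarity(query_vector, chunk_vectors)[0]
        ranked = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)
        return [chunks[i] for i in ranked[:top_k]]


    def build_prompt(query: str, context_chunks: list[str]) -> str:
        # Grounding instruction: answer only from the retrieved context.
        context = "\n\n".join(context_chunks)
        return (
            "Answer the question using only the context below. "
            "If the answer is not in the context, say you do not know.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
        )


    def generate_answer(prompt: str) -> str:
        # Placeholder for a call to a chat model such as Llama 2.
        raise NotImplementedError("Connect this to a language model of your choice.")


    if __name__ == "__main__":
        # Short stand-in for the external data source (e.g., the NCFM book).
        document = (
            "A limit order is an order to buy or sell a security at a specified "
            "price or better. A market order executes immediately at the best "
            "available price. Circuit breakers halt trading when an index moves "
            "beyond preset thresholds."
        )
        question = "What is a limit order?"
        prompt = build_prompt(question, retrieve(question, chunk_text(document)))
        print(prompt)  # pass this prompt to generate_answer() once a model is wired in

In the setting the abstract describes, the retrieved context would come from the NCFM book and the assembled prompt would be passed to Llama 2; the comparison reported in the paper amounts to running the same questions with and without this retrieval step.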

Keywords: LLMs, hallucinations, RAG, Llama 2, generative AI, hallucinations in LLMs, retrieval-augmented generation

[This article belongs to Journal of Artificial Intelligence Research & Advances (joaira)]

How to cite this article:
Atharva Patil, Arohi Paigavan, Amarti Dhamele, Abbas Merchant, Aditya Kasar. Unmasking Hallucinations in Large Language Models Using an Analysis of the Llama 2 Model and RAG Intervention. Journal of Artificial Intelligence Research & Advances. 2024; 12(01):-.
How to cite this URL:
Atharva Patil, Arohi Paigavan, Amarti Dhamele, Abbas Merchant, Aditya Kasar. Unmasking Hallucinations in Large Language Models Using an Analysis of the Llama 2 Model and RAG Intervention. Journal of Artificial Intelligence Research & Advances. 2024; 12(01):-. Available from: https://journals.stmjournals.com/joaira/article=2024/view=181706


Regular Issue | Subscription | Review Article
Volume 12
Issue 01
Received 09/10/2024
Accepted 05/11/2024
Published 08/11/2024

