Encoder-Decoder Based Fine-Tuned Model for Code Doubt Solver

Year : 2025 | Volume : 12 | Issue : 03 | Pages : 35–42
    By

  • Utkarsh Yadav,

  • Shivesh,

  • Shivam Shukla,

  • Ujjwal Sharma,

  • Richa Suryavanshi,

  1. Student, Department of Computer Science and Engineering, Echelon Institute of Technology, Faridabad, Haryana, India
  2. Student, Department of Computer Science and Engineering, Echelon Institute of Technology, Faridabad, Haryana, India
  3. Student, Department of Computer Science and Engineering, Echelon Institute of Technology, Faridabad, Haryana, India
  4. Student, Department of Computer Science and Engineering, Echelon Institute of Technology, Faridabad, Haryana, India

Abstract

As technology advances, industry needs more technically skilled professionals. Most of them rely on programming in their daily work, and when doubts arise they seek help from teachers or from large language models (LLMs) such as GPT or DeepSeek. When errors occur, troubleshooting and resolving them becomes tedious. General-purpose LLMs can suggest solutions, but because they are not optimized for coding, their accuracy may suffer. Moreover, most current models rely on an encoder-only or decoder-only architecture, their training is suboptimal for code generation and interpretation tasks, and they process code snippets like natural language, ignoring programming-language-specific symbols during tokenization. These pre-trained models are transformer based, either encoder based or decoder based (e.g., BERT, GPT, RoBERTa, LLaMA), which makes them capable of either understanding or generation, but not both: encoder-only models are effective for understanding tasks but not ideal for generation, while decoder-only models excel at generation from a prompt. This makes such pre-trained models inefficient for code debugging, code explanation, code completion, and code generation. Ideally, one should use an encoder-decoder transformer model (e.g., mT5, BART, T5), in which the encoder comprehends the input and the decoder generates the relevant output, making it well suited to code generation, code completion, code explanation, and code debugging. This study explores the optimization of encoder-decoder transformer architectures for code-related tasks by fine-tuning them specifically on programming-language datasets. We evaluate their effectiveness in understanding and generating syntactically correct and semantically meaningful code, as well as in solving user-generated coding doubts, code completion, code generation, and code debugging.
Our proposed method improves not only understanding but also accuracy, optimizing outcomes for coding tasks such as general programming doubts, code debugging, code generation, code completion, and code explanation. Experimental results demonstrate that the fine-tuned encoder-decoder model outperforms general-purpose LLMs, proving more capable at code correction, code completion, code generation, and code explanation. An encoder-decoder model carefully fine-tuned on a suitable dataset can respond intelligently to coding queries, offering more reliable and accurate responses. This work lays a foundation for bridging the gap between natural language (NL) and programming language (PL).
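The abstract notes that general-purpose models often tokenize code like natural language, losing programming-language-specific symbols. The toy sketch below (not the paper's method; all function names are illustrative) contrasts a naive whitespace split with a code-aware tokenizer that keeps identifiers, numbers, and each operator or punctuation symbol as distinct tokens:

```python
import re

def nl_tokenize(code: str):
    """Naive whitespace split: code symbols stay glued to identifiers,
    so a whole statement can collapse into a single opaque token."""
    return code.split()

def code_tokenize(code: str):
    """Code-aware split: keep identifiers, integer literals, and each
    non-word symbol (operators, brackets, semicolons) as its own token."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)

snippet = "for(i=0;i<n;i++)"
print(nl_tokenize(snippet))    # ['for(i=0;i<n;i++)']
print(code_tokenize(snippet))  # ['for', '(', 'i', '=', '0', ';', 'i', '<',
                               #  'n', ';', 'i', '+', '+', ')']
```

A code-aware vocabulary like this is one reason code-specific pre-training (e.g., CodeT5's identifier-aware objectives) can outperform tokenizers designed purely for natural language.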

Keywords: Encoder-decoder transformer, code generation, program repair, fine-tuning, natural language to code (NL2Code)

[This article belongs to Recent Trends in Programming languages]

How to cite this article:
Utkarsh Yadav, Shivesh, Shivam Shukla, Ujjwal Sharma, Richa Suryavanshi. Encoder-Decoder Based Fine-Tuned Model for Code Doubt Solver. Recent Trends in Programming languages. 2025; 12(03):35-42.
How to cite this URL:
Utkarsh Yadav, Shivesh, Shivam Shukla, Ujjwal Sharma, Richa Suryavanshi. Encoder-Decoder Based Fine-Tuned Model for Code Doubt Solver. Recent Trends in Programming languages. 2025; 12(03):35-42. Available from: https://journals.stmjournals.com/rtpl/article=2025/view=232675



Regular Issue Subscription Review Article
Volume 12
Issue 03
Received 09/07/2025
Accepted 07/10/2025
Published 17/10/2025
Publication Time 100 Days

