Methodology for Evaluating Code Synthesis in Large Language Models: ChatGPT and Copilot: A Review

Year: 2025 | Volume: 12 | Issue: 03 | Pages: 01–07
By

Saurabh Sheoran¹, Dinesh Kumar²

1. Student, Department of Computer Science and Engineering, BRCM College of Engineering and Technology, Bahal, Bhiwani, Haryana, India
2. Professor, Department of Computer Science and Engineering, BRCM College of Engineering and Technology, Bahal, Bhiwani, Haryana, India

Abstract

The authors introduce a comprehensive framework for assessing the code-generation capabilities of large language models, focusing on ChatGPT and Copilot through a benchmark suite of 25 program synthesis tasks. Their main goal was to demonstrate how to make proper comparisons rather than to evaluate the newest models, since model versions change frequently. The critique examines how the methodology addresses both functional and non-functional aspects of code. In functional testing, ChatGPT delivered 17 fully correct solutions, compared to Copilot’s 13. Non-functional assessment revealed that, despite generally decent quality, both models still produced recognizable code smells. Further, the study incorporated human evaluators to judge code quality, offering nuanced insights into each LLM’s strengths and weaknesses. The critique emphasizes the practical significance of these findings in helping developers choose the most appropriate LLM for their coding needs and highlights how systematic evaluation can guide tool selection in real-world development scenarios, where LLMs can be genuinely useful to software developers. Finally, it advocates for expanding the evaluation to additional metrics, other LLMs and programming languages, and newer model iterations to gain a more holistic understanding of LLMs’ code-synthesis abilities.
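For illustration, the functional scoring described above (a task counts as fully correct only if the generated program passes every test case) can be sketched in a few lines of Python. This is a minimal, hypothetical harness, not the authors' actual evaluation code: the task name, test data, and candidate solution below are stand-ins, and the real benchmark suite (PSB2) supplies far larger task sets and test data.

    # Minimal sketch of all-or-nothing functional scoring: a task is
    # "fully correct" only if the candidate passes every test case.

    TASKS = {
        # Hypothetical stand-in for one of the 25 benchmark tasks;
        # the real PSB2 task set and test data are not reproduced here.
        "gcd": [((12, 18), 6), ((7, 13), 1), ((100, 75), 25)],
    }

    def passes_all(source: str, func_name: str, tests) -> bool:
        """Run a candidate solution in an isolated namespace against all tests."""
        namespace: dict = {}
        try:
            exec(source, namespace)  # assumes trusted input; sandbox in practice
            func = namespace[func_name]
            return all(func(*args) == expected for args, expected in tests)
        except Exception:
            return False

    # Hand-written stand-in for an LLM-generated solution.
    candidate = '''
    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a
    '''

    if __name__ == "__main__":
        correct = sum(passes_all(candidate, name, tests)
                      for name, tests in TASKS.items())
        print(f"Fully correct solutions: {correct}/{len(TASKS)}")

With the full 25-task suite, the count printed by such a harness corresponds to the "17 fully correct" (ChatGPT) and "13 fully correct" (Copilot) figures reported in the abstract.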

Keywords: Code generator, LLM, ChatGPT, Copilot, software development

[This article belongs to Recent Trends in Programming Languages]

How to cite this article:
Saurabh Sheoran, Dinesh Kumar. Methodology for Evaluating Code Synthesis in Large Language Models: ChatGPT and Copilot: A Review. Recent Trends in Programming Languages. 2025; 12(03): 01–07.
How to cite this URL:
Saurabh Sheoran, Dinesh Kumar. Methodology for Evaluating Code Synthesis in Large Language Models: ChatGPT and Copilot: A Review. Recent Trends in Programming Languages. 2025; 12(03): 01–07. Available from: https://journals.stmjournals.com/rtpl/article=2025/view=232654


References

  1. Ságodi Z, Siket I, Ferenc R. Methodology for code synthesis evaluation of LLMs presented by a case study of ChatGPT and Copilot. IEEE Access. 2024 May 21; 12: 72303–16.
  2. Helmuth T, Kelly P. PSB2: the second program synthesis benchmark suite. In Proceedings of the Genetic and Evolutionary Computation Conference. 2021 Jun 26; 785–794.
  3. Bubeck S, Chandrasekaran V, Eldan R, Gehrke J, Horvitz E, Kamar E, Lee P, Lee YT, Li Y, Lundberg S, Nori H. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712. 2023 Mar 22.
  4. Dieumegard A, Toom A, Pantel M. Model-based formal specification of a DSL library for a qualified code generator. In Proceedings of the 12th Workshop on OCL and Textual Modelling. 2012 Sep 30; 61–62.
  5. Black P. Static analyzers: Seat belts for your code. IEEE Secur Priv. 2012 Jan 10; 10(3): 48–52.
  6. Kaner C. Software engineering metrics: What do they measure and how do we know? In Proc Int’l Software Metrics Symposium, Chicago, IL, USA. 2004 Sep; 1–12.
  7. Vaithilingam P, Zhang T, Glassman EL. Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 2022 Apr 27; 1–7.
  8. Al Madi N. How readable is model-generated code? Examining readability and visual inspection of GitHub Copilot. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering. 2022 Oct 10; 1–5.
  9. Marcilio D, Furia CA, Bonifácio R, Pinto G. Automatically generating fix suggestions in response to static code analysis warnings. In 2019 IEEE 19th International Working Conference on Source Code Analysis and Manipulation (SCAM). 2019 Sep 30; 34–44.
  10. Marcilio D, Bonifácio R, Monteiro E, Canedo E, Luz W, Pinto G. Are static analysis violations really fixed? A closer look at realistic usage of SonarQube. In 2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC). 2019 May 25; 209–219.
  11. Zhang Y, Xiao Y, Kabir MM, Yao D, Meng N. Example-based vulnerability detection and repair in Java code. In Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension. 2022 May 16; 190–201.
  12. Maddison C, Tarlow D. Structured generative models of natural source code. In International Conference on Machine Learning, PMLR. 2014 Jun 18; 649–657.
  13. Sobania D, Briesch M, Rothlauf F. Choose your programming copilot: A comparison of the program synthesis performance of GitHub Copilot and genetic programming. In Proceedings of the Genetic and Evolutionary Computation Conference. 2022 Jul 8; 1019–1027.
  14. Jain N, Vaidyanath S, Iyer A, Natarajan N, Parthasarathy S, Rajamani S, Sharma R. Jigsaw: Large language models meet program synthesis. In Proceedings of the 44th International Conference on Software Engineering. 2022 May 21; 1219–1231.
  15. Pearce H, Tan B, Ahmad B, Karri R, Dolan-Gavitt B. Examining zero-shot vulnerability repair with large language models. In 2023 IEEE Symposium on Security and Privacy (SP). 2023 May 21; 2339–2356.
  16. Asare O, Nagappan M, Asokan N. Is GitHub’s Copilot as bad as humans at introducing vulnerabilities in code? Empir Software Eng. 2023 Nov; 28(6): 129.
  17. White J, Hays S, Fu Q, Spencer-Smith J, Schmidt DC. ChatGPT prompt patterns for improving code quality, refactoring, requirements elicitation, and software design. In Generative AI for Effective Software Development. Cham: Springer Nature Switzerland; 2024 Jun 1; 71–108.

Regular Issue | Subscription | Review Article
Volume 12, Issue 03
Received: 11/04/2025
Accepted: 28/06/2025
Published: 17/10/2025
Publication Time: 189 days

