Educating Compilers to Learn: Utilizing Machine Learning for More Brilliant Code Optimization

Year: 2025 | Volume: 16 | Issue: 01 | Pages: 50–54
By

Manish Kumar Jha¹, Shambhu Kumar Mishra²

  1. Research Scholar, Department of Mathematics, Patliputra University, Bihar, India
  2. Professor, Department of Mathematics, Patliputra University, Bihar, India

Abstract

This study explores the use of machine learning (ML) approaches to compiler optimization, transforming traditional static compilation into adaptive, dynamic systems capable of making context-specific decisions. Traditional compilers rely largely on heuristic or rule-based optimization techniques. While these techniques work well in the general case, they adapt poorly to the diverse code structures and hardware configurations of modern systems. This limitation is especially acute in today's computational landscape, where software applications are increasingly complex and resource-intensive. Machine learning models such as neural networks and reinforcement learning (RL), by contrast, offer a paradigm shift: compilers can learn from past performance data and adjust optimizations dynamically to the unique features of individual codebases. The study evaluates the application of supervised, unsupervised, and reinforcement learning models to predicting optimal compiler configurations, aiming to improve critical performance metrics such as execution time, memory usage, and energy efficiency. The experimental results show improvements across various benchmarks, including reduced execution times, lower memory usage, and better energy efficiency compared with state-of-the-art compiler optimization methods. Reinforcement learning models were the most adaptable, performing strongly in dynamic and variable computational environments, whereas supervised and unsupervised models proved more effective on predictable and well-clustered codebases. These results underscore the potential of ML-based compiler optimization for building smarter and more flexible compiler systems. ML could enable compilers to respond dynamically to the diverse requirements imposed by applications and computational environments, paving the way for a generation of self-optimizing compilers with unprecedented performance scalability. This work positions ML as an essential ingredient of modern compiler design for coping with the growing demands of advanced computational workloads.
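The abstract contrasts two learning regimes without giving implementation detail, so the sketch below illustrates what each could look like in practice: a supervised model that predicts a good flag setting from static program features, and a minimal epsilon-greedy reinforcement-learning loop that selects flags from measured rewards. Every concrete name here (the feature layout, the flag set, the simulated speedups) is an illustrative assumption, not the authors' pipeline.

# Illustrative sketch only: features, flags, and models are assumed
# stand-ins for the kind of pipeline the abstract describes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# --- Supervised: predict the best flag setting from static features ---
# Each row: e.g. [loop_count, avg_loop_depth, branches, mem_ops, arith_ops]
# extracted from a program; label = the flag that gave the best measured
# runtime on that program during offline training.
flags = ["-O1", "-O2", "-O3", "-Os"]
X = rng.random((200, 5))                 # placeholder feature vectors
y = rng.choice(flags, size=200)          # placeholder training labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("predicted flag:", clf.predict(rng.random((1, 5)))[0])

# --- Reinforcement learning: epsilon-greedy flag selection online -----
# Treat each flag set as a bandit arm; reward = measured speedup over a
# baseline build (simulated here with a fixed noisy mean per arm).
true_speedup = {"-O1": 1.3, "-O2": 1.7, "-O3": 1.9, "-Os": 1.5}
q = {f: 0.0 for f in flags}              # running reward estimates
n = {f: 0 for f in flags}                # pull counts per arm
epsilon = 0.1

for step in range(500):
    if rng.random() < epsilon:           # explore a random flag
        f = rng.choice(flags)
    else:                                # exploit the best estimate
        f = max(q, key=q.get)
    reward = true_speedup[f] + rng.normal(0, 0.1)  # "compile and time"
    n[f] += 1
    q[f] += (reward - q[f]) / n[f]       # incremental mean update

print("learned best flag:", max(q, key=q.get))

In a real system the random placeholders would be replaced by features extracted from the compiler's intermediate representation and by rewards from actual compile-and-run cycles; the epsilon-greedy update is the simplest instance of the adaptive behavior the abstract attributes to RL-based optimization.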

Keywords: Machine learning, compiler optimization, reinforcement learning, neural networks, adaptive compilation

[This article belongs to Journal of Computer Technology & Applications]

How to cite this article:
Manish Kumar Jha, Shambhu Kumar Mishra. Educating Compilers to Learn: Utilizing Machine Learning for More Brilliant Code Optimization. Journal of Computer Technology & Applications. 2024; 16(01):50-54.
How to cite this URL:
Manish Kumar Jha, Shambhu Kumar Mishra. Educating Compilers to Learn: Utilizing Machine Learning for More Brilliant Code Optimization. Journal of Computer Technology & Applications. 2024; 16(01):50-54. Available from: https://journals.stmjournals.com/jocta/article=2024/view=190018



Regular Issue | Subscription | Review Article
Volume 16, Issue 01
Received: 14/11/2024 | Accepted: 25/11/2024 | Published: 18/12/2024