Open Access
Sree Krishna Raja K, P. Jaya Krishna

Student, Assistant Professor, Nalla Malla Reddy Engineering College, Hyderabad, India
Abstract
The performance of modern VLSI systems is heavily influenced by power constraints, necessitating precise power estimation and effective optimization techniques. Traditional methods, such as gate-level simulations, are often slow and computationally intensive. This paper introduces DRPENN (Differential Algorithm for Routing and Placement Optimization using Neural Networks), an innovative solution that combines a Switching Activity Estimator (SAE) with a neural network-assisted differential algorithm. By leveraging toggle rates from simulations to train a Graph Neural Network (GNN), DRPENN circumvents the need for extensive gate-level simulations. This trained model accurately predicts switching activity, optimizing the placement and routing of circuit elements while minimizing computational demands. The methodology involves converting gate-level netlists into graphical representations and using a differential algorithm for back-propagation. Our experimental results show that DRPENN achieves lower error rates and faster inference times compared to conventional probabilistic SAE methods, offering a promising approach for efficient and accurate VLSI design optimization.
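For context, the conventional probabilistic switching-activity estimation that DRPENN is benchmarked against can be sketched in a few lines. The tiny netlist, gate set, and input signal probabilities below are illustrative assumptions, not taken from the paper: under the standard zero-delay model with temporally and spatially independent signals, each net's toggle rate is 2p(1 − p), where p is its probability of being at logic 1.

```python
# Illustrative sketch (not the paper's DRPENN method): probabilistic
# switching activity estimation on a toy gate-level netlist.

def signal_prob(gate, in_probs):
    """Probability that the gate output is logic 1, assuming independent inputs."""
    if gate == "AND":
        p = 1.0
        for q in in_probs:
            p *= q
        return p
    if gate == "OR":
        p = 1.0
        for q in in_probs:
            p *= 1.0 - q
        return 1.0 - p
    if gate == "XOR":  # two-input XOR
        a, b = in_probs
        return a * (1.0 - b) + b * (1.0 - a)
    if gate == "NOT":
        return 1.0 - in_probs[0]
    raise ValueError(f"unknown gate type: {gate}")

def toggle_rate(p):
    """Expected toggles per cycle for a signal with 1-probability p."""
    return 2.0 * p * (1.0 - p)

# Hypothetical netlist: node -> (gate type, fan-in nets); primary inputs map to None.
netlist = {
    "a": None, "b": None, "c": None,
    "n1": ("AND", ["a", "b"]),
    "n2": ("XOR", ["n1", "c"]),
}
probs = {"a": 0.5, "b": 0.5, "c": 0.5}

# Propagate signal probabilities in topological order (insertion order here).
for node, spec in netlist.items():
    if spec is not None:
        gate, fanin = spec
        probs[node] = signal_prob(gate, [probs[f] for f in fanin])

activity = {net: toggle_rate(p) for net, p in probs.items()}
```

With all inputs at p = 0.5, the AND output n1 has p = 0.25 and a toggle rate of 0.375; such per-net estimates are what a learned GNN model would replace for large designs, where the independence assumption and gate-by-gate propagation become the accuracy and runtime bottleneck.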
Keywords: VLSI design, routing optimization, placement optimization, differential algorithm, neural networks
This article belongs to Journal of VLSI Design Tools and Technology (jovdtt).
| Volume | 14 |
| Issue | 02 |
| Received | July 19, 2024 |
| Accepted | July 31, 2024 |
| Published | August 7, 2024 |