This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.
Naviya Shetty,
Anuja Shinde,
Shiksha Dubey,
- Research Scholar, Department of Computer Science and Engineering, Thakur Institute of Management Studies, Career Development & Research (TIMSCDR), Maharashtra, India
- Research Scholar, Department of Computer Science and Engineering, Thakur Institute of Management Studies, Career Development & Research (TIMSCDR), Maharashtra, India
- Research Supervisor, Department of Computer Science and Engineering, Thakur Institute of Management Studies, Career Development & Research (TIMSCDR), Maharashtra, India
Abstract
Hyperparameter tuning is often the decisive step in building powerful and robust machine learning models. Traditional exhaustive search methods (Grid Search and others) guarantee coverage of the search space but are computationally very expensive; random search is cheaper but can still miss good regions; and modern model-based and population-based methods (Bayesian Optimization, Tree-structured Parzen Estimator (TPE), Genetic Algorithms) are thought to provide better trade-offs between performance and resource utilization. This study presents a comparative evaluation of five tuning algorithms (Grid Search, Random Search, Bayesian Optimization, TPE/Optuna, Genetic Algorithm) applied to a range of supervised models (Random Forest, XGBoost, SVM, MLP) on several classification and regression datasets. The experiments measure accuracy (or RMSE for regression), running time, and memory consumption, and the results are interpreted and visualized in a Streamlit analytical dashboard. Based on the results, the model-based optimizers (TPE and Bayesian Optimization) achieved near-optimal solutions in a fraction of the time and memory required by Grid Search, making them a suitable fit for low-resource environments. The paper outlines the replication procedure, its stages, and the technical configuration of the dashboard and export/reporting pipeline (refer to the project files in the code modules). The choice of hyperparameters, including learning rates, regularization coefficients, tree depth, and the number of estimators, greatly affects model generalization and efficiency. Exhaustive tuning over a large parameter grid is not always feasible for complex models and multiple datasets, which makes more intelligent search strategies increasingly important, particularly when experiments must run on limited CPUs, memory, or edge hardware.
Here, we comparatively benchmark a range of tuning methods on modest-sized datasets, tracking model behaviour and wrapping experiments, visualization, and statistical testing in a reusable Streamlit dashboard. We identify the tuning strategies that provide a good trade-off between accuracy and computational cost, and we provide a well-documented, reproducible workflow for practitioners.
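The cost gap between exhaustive and sampled search described above can be illustrated with a minimal, library-free sketch. The objective function and its optimum below are purely illustrative stand-ins (not values from the paper); in the actual study the score would come from cross-validating a model such as Random Forest or XGBoost.

```python
import itertools
import random

# Toy objective standing in for validation accuracy; peak at
# learning_rate=0.1, max_depth=6 (illustrative values, not from the paper).
def score(learning_rate, max_depth):
    return 1.0 - abs(learning_rate - 0.1) * 2 - abs(max_depth - 6) * 0.02

grid = {
    "learning_rate": [0.01, 0.05, 0.1, 0.2, 0.3],
    "max_depth": [2, 4, 6, 8, 10, 12],
}

# Grid Search: exhaustive, so the cost is the product of the grid sizes
# (5 * 6 = 30 evaluations here; it grows multiplicatively per parameter).
grid_trials = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
best_grid = max(grid_trials, key=lambda p: score(**p))

# Random Search: a fixed budget of 10 evaluations over the same space,
# sampling the continuous range directly instead of a discretized grid.
rng = random.Random(0)
rand_trials = [
    {"learning_rate": rng.uniform(0.01, 0.3), "max_depth": rng.randint(2, 12)}
    for _ in range(10)
]
best_rand = max(rand_trials, key=lambda p: score(**p))

print(len(grid_trials), "grid evals, best score:", round(score(**best_grid), 3))
print(len(rand_trials), "random evals, best score:", round(score(**best_rand), 3))
```

Model-based methods such as TPE go one step further than this sketch: instead of sampling uniformly, they fit a surrogate over past trials and concentrate the remaining budget on promising regions, which is why they tend to reach near-optimal scores with far fewer evaluations than Grid Search.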
Keywords: Machine Learning Models, Hyperparameter Optimization, Model Comparison, Benchmark Datasets, Performance Evaluation
Naviya Shetty, Anuja Shinde, Shiksha Dubey. Comparison of Models of Machine Learning and Hyperparameter optimization methods on various datasets. Recent Trends in Programming languages. 2026; 13(01):-. Available from: https://journals.stmjournals.com/rtpl/article=2026/view=242305

Recent Trends in Programming languages

| Volume | 13 |
| Issue | 01 |
| Received | 10/03/2026 |
| Accepted | 02/04/2026 |
| Published | 30/04/2026 |
| Publication Time | 51 Days |