This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.
Sohan Lal Gupta, Vinod Kataria, Arpita Sharma, Vikram Khandelwal, Anjali Pandey — Assistant Professor, Swami Keshvanand Institute of Technology Management & Gramothan, Jaipur, Rajasthan, India
Vipin Gupta — Assistant Professor, Suresh Gyan Vihar University, Jaipur, Rajasthan, India
Abstract
K-Means clustering is a widely used unsupervised learning algorithm for partitioning a dataset into distinct clusters. Despite its popularity and simplicity, K-Means has several limitations, including sensitivity to initial centroids, convergence to local minima, and inefficiency on large datasets. This paper reviews recent advancements aimed at addressing these challenges and enhancing the performance of the K-Means algorithm. Improved initialization methods, such as K-Means++, significantly reduce the chance of poor clustering results by selecting better-separated starting centroids. In addition, acceleration techniques, including advanced optimization algorithms and parallel processing, have been developed to speed up convergence and handle larger datasets more efficiently. We also explore hybrid approaches that combine K-Means with other clustering algorithms to achieve more accurate and robust clustering outcomes. Together, these advancements improve the performance, scalability, and robustness of K-Means, making it suitable for a wider range of applications in data analysis and machine learning.
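The K-Means++ seeding step mentioned above can be sketched as follows. This is a minimal illustration in plain Python (the function name and list-of-tuples representation are our own, not from the paper): the first centroid is chosen uniformly at random, and each subsequent centroid is sampled with probability proportional to its squared distance from the nearest centroid already chosen.

```python
import random

def kmeans_pp_init(points, k, rng=random.Random(0)):
    """Illustrative k-means++ seeding over a list of coordinate tuples."""
    # First centroid: uniform random choice.
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        # Squared distance from each point to its nearest chosen centroid.
        d2 = [min(sum((p - c) ** 2 for p, c in zip(pt, ct))
                  for ct in centroids)
              for pt in points]
        # Sample the next centroid with probability proportional to d2
        # (roulette-wheel selection over the cumulative weights).
        r = rng.random() * sum(d2)
        acc = 0.0
        for pt, w in zip(points, d2):
            acc += w
            if acc >= r:
                centroids.append(pt)
                break
    return centroids
```

Because far-away points carry more weight, the seeds tend to land in different natural clusters, which is why this initialization reduces the chance of converging to a poor local minimum.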
Keywords: Data mining, data clustering, centroids, SSE, k-means, distance metrics
[This article belongs to International Journal of Solid State Innovations & Research]
Sohan Lal Gupta, Vinod Kataria, Arpita Sharma, Vikram Khandelwal, Anjali Pandey, Vipin Gupta. Advancements in K-Means Clustering: Boosting Algorithm Performance through Innovations. International Journal of Solid State Innovations & Research. 2025; 03(01):-. Available from: https://journals.stmjournals.com/ijssir/article=2025/view=0
| Volume | 03 |
| Issue | 01 |
| Received | 24/03/2025 |
| Accepted | 01/04/2025 |
| Published | 09/04/2025 |
| Publication Time | 16 Days |