Vibhu Verma
Principal Data Scientist, Department of Computer Science, GWU, Capital One, NY, USA
Abstract
This paper presents an intrinsic evaluation of graph embedding techniques on clustering and community detection tasks. We analyze a diverse set of embedding methods, ranging from traditional techniques such as Laplacian eigenmaps to more recent approaches such as graph autoencoders, high-order proximity preserved embedding (HOPE), and graph attention networks (GAT), using two widely studied citation datasets, Cora and CiteSeer. Our evaluation relies on two metrics: the Silhouette score for cluster separation and Modularity for community detection quality. The results show that HOPE, GraRep, and GAT capture community structure effectively, with GraRep achieving the best balance between Silhouette and Modularity scores. Other techniques, such as structural deep network embedding (SDNE), produce well-separated clusters but perform poorly on community detection. In contrast, Struc2Vec (simulated) and Poincaré embeddings perform poorly along both axes. These findings clarify the strengths and limitations of different graph embedding methods and inform future research and applications in network analysis, social media analytics, and recommender systems.
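The evaluation protocol described above can be reproduced with standard tooling: cluster the learned node embeddings, measure cluster separation with the Silhouette score, and measure community quality by scoring the Modularity of the induced partition on the original graph. The sketch below, using networkx and scikit-learn, is an illustrative reconstruction of that workflow rather than the authors' code; the function name evaluate_embedding, the k-means clustering step, and the default cluster count (7 for Cora, 6 for CiteSeer) are assumptions.

import networkx as nx
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evaluate_embedding(G, embeddings, nodes, n_clusters=7):
    """Score one embedding intrinsically: Silhouette for cluster separation,
    Modularity for the community structure of the induced partition.

    G          -- networkx.Graph of the citation network (e.g., Cora)
    embeddings -- array of shape (n_nodes, dim); row i embeds nodes[i]
    nodes      -- node ids aligned with the rows of `embeddings`
    n_clusters -- assumed number of clusters (7 for Cora, 6 for CiteSeer)
    """
    # Partition nodes by clustering their embeddings (k-means is one common choice).
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)

    # Silhouette: how compact and well separated the clusters are in embedding space.
    sil = silhouette_score(embeddings, labels)

    # Modularity: treat each cluster as a community on the original graph.
    communities = [
        {nodes[i] for i in range(len(nodes)) if labels[i] == c}
        for c in range(n_clusters)
    ]
    mod = nx.community.modularity(G, communities)
    return sil, mod

Running every embedding method (DeepWalk, node2vec, HOPE, GraRep, SDNE, GAT, and so on) through the same function keeps the two scores directly comparable across methods.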
Keywords: Graph embeddings, intrinsic evaluation, clustering, community detection, Silhouette score, Modularity, graph neural networks, HOPE, GraRep, GAT, SDNE, node2vec, DeepWalk, network analysis, community structure, machine learning, data representation
[This article belongs to International Journal of Algorithms Design and Analysis Review (ijadar)]
Vibhu Verma. Intrinsic Evaluation of Graph Embeddings: Assessing Clustering and Community Detection Performance. International Journal of Algorithms Design and Analysis Review. 2025; 03(01):40-48. Available from: https://journals.stmjournals.com/ijadar/article=2025/view=0
References
- Belkin M, Niyogi P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Dietterich TG, Becker S, Ghahramani Z, editors. NIPS 2001 – Advances in Neural Information Processing Systems 14, Cambridge, MA, USA: MIT Press; 2001.
- Mohan A, Venkatesan R, Pramod KV. A scalable method for link prediction in large real world networks. J Parallel Distrib Comput. 2017; 109: 89–101.
- Perozzi B, Al-Rfou R, Skiena S. DeepWalk: online learning of social representations. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, August 24–27, 2014. pp. 701–710.
- Grover A, Leskovec J. node2vec: Scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13–17, 2016. pp. 855–864.
- Ribeiro LF, Saverese PH, Figueiredo DR. struc2vec: Learning node representations from structural identity. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, Nova Scotia, Canada, August 13–17, 2017. pp. 385–394.
- Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. September 9, 2016.
- Hamilton W, Ying Z, Leskovec J. Inductive representation learning on large graphs. In: NIPS 2017 – 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, December 4–9, 2017. pp. 1025–1035.
- Liu Z, Zhou J. Graph attention networks. In: Introduction to Graph Neural Networks. Cham, Switzerland: Springer International Publishing; 2020. pp. 39–41.
- Veličković P, Fedus W, Hamilton WL, Liò P, Bengio Y, Hjelm RD. Deep graph infomax. arXiv preprint arXiv:1809.10341. September 27, 2018.
- Zhu W, Wang X, Cui P. Deep learning for learning graph representations. In: Pedrycz W, Chen S-M, editors. Deep Learning: Concepts and Architectures. New York, NY, USA: Springer; 2020. pp. 169–210.
- Cao S, Lu W, Xu Q. GraRep: learning graph representations with global structural information. In: Proceedings of the 24th ACM International Conference on Information and Knowledge Management, Melbourne, Victoria, Australia, October 18–23, 2015. pp. 891–900.
- Kipf TN, Welling M. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308. November 21, 2016.
- Nickel M, Kiela D. Poincaré embeddings for learning hierarchical representations. In: NIPS 2017 – 31st Conference on Advances in Neural Information Processing Systems, Long Beach, CA, USA, December 4–9, 2017. pp. 6341–6350.
- Kota TK, Rongala S. Implementing AI-driven secure cloud data pipelines in Azure with Databricks. Nanotechnol Percept. 2024; 20 (S15): 3063–3075.
- Rongala S. An analytical review of not only SQL (NoSQL) databases: importance and evaluation. Int J Commun Netw Inform Security. 2023; 15 (3): 378–379.
- Rongala S. An overview of key principles of effective data visualization. Int J Intell Syst Appl Eng. 2024; 12 (4): 4847–4853.
- Joshi N. Optimizing real-time ETL pipelines using machine learning techniques. 2024. DOI: http://dx.doi.org/10.2139/ssrn.5054767
International Journal of Algorithms Design and Analysis Review
Volume: 03
Issue: 01
Received: 31/01/2025
Accepted: 10/02/2025
Published: 21/02/2025
Publication Time: 21 days
// Look up how many times this article has been cited via the Crossref REST API
// and display the count in the page element with id "citation-count".
async function fetchCitationCount(doi) {
  const apiUrl = `https://api.crossref.org/works/${doi}`;
  try {
    const response = await fetch(apiUrl);
    const data = await response.json();
    // Crossref reports incoming citations in the "is-referenced-by-count" field.
    const citationCount = data.message["is-referenced-by-count"];
    document.getElementById("citation-count").innerText = `Citations: ${citationCount}`;
  } catch (error) {
    console.error("Error fetching citation count:", error);
    document.getElementById("citation-count").innerText = "Citations: Data unavailable";
  }
}

fetchCitationCount("10.37591/IJADAR.v03i01.0");