Open Access
This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author's request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.
Nagajayant Nagamani
Software Engagement, Virtusa, Chennai, Tamil Nadu, India
Abstract
Unsupervised representation learning has become a cornerstone of contemporary machine learning, enabling algorithms to extract informative features from unlabelled, high-dimensional data. This work investigates the efficacy of stacked denoising autoencoders (SDAEs) trained via parallelized stochastic gradient descent (SGD) as a scalable approach to feature extraction. By strategically leveraging multi-threaded computation, our study systematically examines the trade-offs between increased parallelism, training efficiency, and the preservation of model accuracy. Experiments on the MNIST digit classification benchmark demonstrate that parallelizing the autoencoder training process significantly reduces convergence time, particularly as the dimensionality of the hidden layers grows. The speedup from multi-threading, while substantial, was occasionally sublinear due to inherent sequential dependencies in the backpropagation algorithm and resource contention during memory access. Moreover, visualizations of learned filters illustrate the model's ability to capture meaningful patterns in the input space, while reconstructions from noisy inputs underline the robustness of the denoising criterion. Classification using the stacked representations achieves accuracy competitive with state-of-the-art supervised models such as SVMs, highlighting the practical utility of unsupervised pretraining. Beyond SGD, the paper discusses the promise of evolutionary optimization algorithms, in particular genetic algorithms, as highly parallelizable and potentially more robust alternatives for training deep architectures on challenging, noisy datasets. Overall, the study emphasizes scalable training techniques for deep unsupervised models and outlines future directions for enhancing training speed, generalizability, and resilience to complex data variations. These findings contribute to the broader field of deep learning by showcasing pathways for improving both the computational efficiency and the qualitative performance of neural network-based feature learning systems.
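To make the training setup concrete, the sketch below shows a single-layer denoising autoencoder with tied weights, trained by lock-free multi-threaded SGD in the spirit of Hogwild-style updates. This is an illustrative reconstruction, not the paper's implementation: the layer sizes, corruption level, learning rate, thread count, and the synthetic stand-in for MNIST are all assumptions.

```python
# Minimal sketch (not the paper's code): single-layer denoising autoencoder
# with tied weights, trained by lock-free multi-threaded SGD (Hogwild-style).
import threading

import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 784, 256                     # MNIST-sized visible layer (assumption)
W = rng.normal(0.0, 0.01, (n_vis, n_hid))   # shared weights, updated without locks
b_h = np.zeros(n_hid)                       # hidden bias
b_v = np.zeros(n_vis)                       # reconstruction bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_worker(shard, seed, lr=0.1, noise=0.3):
    """Sweep one data shard, updating the shared parameters lock-free."""
    global W, b_h, b_v                      # augmented assignment needs this
    local_rng = np.random.default_rng(seed)
    for x in shard:
        x_tilde = x * (local_rng.random(n_vis) > noise)  # masking corruption
        h = sigmoid(x_tilde @ W + b_h)                   # encode corrupted input
        z = sigmoid(h @ W.T + b_v)                       # decode with tied weights
        d_z = z - x            # denoising criterion: the target is the CLEAN x
        d_h = (d_z @ W) * h * (1.0 - h)
        W -= lr * (np.outer(x_tilde, d_h) + np.outer(d_z, h))
        b_h -= lr * d_h
        b_v -= lr * d_z

# Synthetic binary "images" stand in for MNIST; one shard per thread.
data = (rng.random((2000, n_vis)) > 0.8).astype(float)
threads = [threading.Thread(target=sgd_worker, args=(shard, i))
           for i, shard in enumerate(np.array_split(data, 4))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Reconstructing a corrupted input gauges the denoising behaviour.
x = data[0]
x_noisy = x * (rng.random(n_vis) > 0.3)
recon = sigmoid(sigmoid(x_noisy @ W + b_h) @ W.T + b_v)
print("reconstruction MSE vs clean input:", np.mean((recon - x) ** 2))
```

Because the workers write to W concurrently without locks, updates can race; tolerating such benign races is what lets shared-memory SGD scale, and, together with memory contention and the inherently sequential forward and backward pass within each example, it is one reason the observed speedup is sublinear. (In CPython the interpreter lock also serializes the Python-level loop; NumPy releases it only inside large array operations, so a compiled implementation would parallelize better.)

The abstract also points to genetic algorithms as a highly parallelizable alternative to gradient-based training. The sketch below, again under illustrative assumptions (a small tied-weight autoencoder, truncation selection, uniform crossover, Gaussian mutation), shows the idea; the per-genome fitness evaluations are independent, so they can be farmed out across workers with no sequential dependency.

```python
# Minimal sketch (an assumption, not the paper's method): evolving the weights
# of a small tied-weight autoencoder with a simple genetic algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid = 64, 16                    # deliberately small, illustrative sizes
data = rng.random((200, n_vis))          # synthetic stand-in data
n_params = n_vis * n_hid                 # genome = flattened weight matrix

def fitness(genome):
    """Higher is better: negative reconstruction MSE on the clean data."""
    W = genome.reshape(n_vis, n_hid)
    recon = np.tanh(data @ W) @ W.T      # encode, then decode with tied weights
    return -np.mean((recon - data) ** 2)

pop = rng.normal(0.0, 0.1, (50, n_params))           # initial population
for generation in range(100):
    scores = np.array([fitness(g) for g in pop])     # embarrassingly parallel step
    elite = pop[np.argsort(scores)[-10:]]            # truncation selection
    # Uniform crossover between pairs of random elites, then Gaussian mutation.
    pa = elite[rng.integers(0, len(elite), len(pop))]
    pb = elite[rng.integers(0, len(elite), len(pop))]
    mask = rng.random(pop.shape) < 0.5
    pop = np.where(mask, pa, pb) + rng.normal(0.0, 0.02, pop.shape)

print("best fitness after evolution:", max(fitness(g) for g in pop))
```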
Keywords: Autoencoders, machine learning, unsupervised learning, stochastic gradient descent, genetic algorithms
This article belongs to Journal of Image Processing & Pattern Recognition Progress.
Nagajayant Nagamani. Accelerating Unsupervised Feature Learning: Parallelized Training of Denoising Autoencoders. Journal of Image Processing & Pattern Recognition Progress. 28/08/2025; 12(03). Available from: https://journals.stmjournals.com/joipprp/article=28/08/2025/view=0
Journal of Image Processing & Pattern Recognition Progress
| Field | Value |
|---|---|
| Volume | 12 |
| Issue | 03 |
| Received | 11/08/2025 |
| Accepted | 19/08/2025 |
| Published | 28/08/2025 |
| Publication Time | 17 Days |