A Study on DCGAN-Based Generative Models: Implementing Anime Character Face Generation

Notice

This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.

Year: 2026 | Volume: 13 | Issue: 01 | Page:
By

  • Aaditya Mittal
  • Harsh Bathija
  • Yamini Sharma
  • Shreya Agarwal

  1. Student, Department of Computer Science and Engineering, JECRC University, Jaipur, Rajasthan, India
  2. Student, Department of Computer Science and Engineering, JECRC University, Jaipur, Rajasthan, India
  3. Student, Department of Computer Science and Engineering, JECRC University, Jaipur, Rajasthan, India
  4. Assistant Professor, Department of Computer Science and Engineering, JECRC University, Jaipur, Rajasthan, India

Abstract

Artificial Intelligence (AI) has in recent years moved from simple rule-based systems to models capable of creative content generation, collectively referred to as Generative AI. One such approach, introduced in 2014, is the Generative Adversarial Network (GAN), which pits two networks against each other: a generator trained to produce synthetic content that mimics real data as closely as possible, and a discriminator trained to distinguish real content from fake. Although GANs showed great potential for content generation, they offered no stable way to generate images, since their shallow architectures led to highly volatile training, poor convergence, and poor image quality. To overcome these challenges, Deep Convolutional Generative Adversarial Networks (DCGANs) were introduced in 2015; they build on traditional GANs by incorporating convolutional and batch-normalization layers, which allow stable, spatially coherent training for image generation. This study examines the performance of DCGAN and its advanced variants, with further architectural optimizations that address the limitations of GANs in image generation. A DCGAN model was trained to generate anime character faces, exploring how convolutional architectures stabilize adversarial training and qualitatively improve the generated output.
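The adversarial objective and the generator's upsampling arithmetic described above can be sketched numerically. The following minimal pure-Python example is illustrative only, not the paper's implementation: it computes the standard binary cross-entropy losses that drive the generator and discriminator, and the transposed-convolution output-size formula for the kernel-4, stride-2, padding-1 layers that a typical DCGAN generator (an assumption here, following common DCGAN configurations) uses to grow a 4×4 feature map into a 64×64 image.

```python
import math

def bce(p, target):
    """Binary cross-entropy for a single predicted probability p in (0, 1)."""
    eps = 1e-12  # guard against log(0)
    return -(target * math.log(p + eps) + (1 - target) * math.log(1 - p + eps))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants D(x) -> 1 on real images and D(G(z)) -> 0 on fakes.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # The generator wants the discriminator to score its fakes as real.
    return bce(d_fake, 1.0)

def deconv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of a transposed convolution (assumed DCGAN layer shape)."""
    return (size - 1) * stride - 2 * pad + kernel

# DCGAN-style generator upsampling path: each layer doubles the resolution.
sizes = [4]
for _ in range(4):
    sizes.append(deconv_out(sizes[-1]))
print(sizes)  # [4, 8, 16, 32, 64]
```

When the discriminator is maximally uncertain (both scores 0.5), its loss equals 2·ln 2; as the generator improves and `d_fake` rises toward 1, the generator loss falls toward 0, which is the tug-of-war the abstract describes.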

Keywords: Deep Convolutional Generative Adversarial Network, discriminator, generator, mode collapse

How to cite this article:
Aaditya Mittal, Harsh Bathija, Yamini Sharma, Shreya Agarwal. A Study on DCGAN-Based Generative Models: Implementing Anime Character Face Generation. Journal of Multimedia Technology & Recent Advancements. 2026; 13(01):-.
How to cite this URL:
Aaditya Mittal, Harsh Bathija, Yamini Sharma, Shreya Agarwal. A Study on DCGAN-Based Generative Models: Implementing Anime Character Face Generation. Journal of Multimedia Technology & Recent Advancements. 2026; 13(01):-. Available from: https://journals.stmjournals.com/jomtra/article=2026/view=242125



Ahead of Print | Review Article
Received: 12/01/2026
Accepted: 09/02/2026
Published: 20/03/2026
Publication Time: 67 days

