Handwriting Digital Image Generation based on GAN: A Comparative Study of Basic GAN and CGAN Models
The rapid development of deep learning has enabled the broad application of artificial intelligence across many fields, image generation among them. Generative Adversarial Networks (GANs) can produce high-quality images through an adversarial training mechanism. This research examines the use and performance of the basic GAN and its conditional variant, the Conditional GAN (CGAN), for handwritten digit image generation. Both models are implemented with the PyTorch deep learning framework and trained on the Modified National Institute of Standards and Technology (MNIST) dataset. The generated images and the loss curves recorded during training are used to assess and compare the two models with respect to image fidelity, loss behaviour, and other relevant factors. The experimental results show that, compared with the basic GAN, the CGAN offers notable advantages in the stability of image quality, the avoidance of mode collapse, and control over the generated image categories. A survey of other state-of-the-art generative models further indicates that the CGAN network structure still has room for optimization to handle increasingly intricate generative tasks.
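For a concrete picture of the comparison described above, the sketch below is a minimal PyTorch CGAN for 28x28 MNIST digits: the class label is embedded and concatenated to the generator's noise vector and to the discriminator's image input, which is what gives the CGAN its control over the generated digit class. The layer sizes, the Adam settings, and the random tensor standing in for an MNIST batch are illustrative assumptions rather than the architecture reported in the paper; dropping the label inputs recovers the basic GAN.

```python
# Minimal CGAN sketch for MNIST (illustrative only; not the paper's exact model).
import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, IMG_DIM = 10, 100, 28 * 28

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, IMG_DIM), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, labels):
        # Condition the noise vector on the digit label.
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + NUM_CLASSES, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),  # probability the image is real
        )

    def forward(self, img, labels):
        # Condition the real/fake decision on the same digit label.
        x = torch.cat([img, self.label_emb(labels)], dim=1)
        return self.net(x)

# One adversarial training step with binary cross-entropy loss.
# The basic GAN step is identical except that the label inputs are dropped.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_imgs = torch.rand(64, IMG_DIM) * 2 - 1           # stand-in for a flattened MNIST batch
real_labels = torch.randint(0, NUM_CLASSES, (64,))
z = torch.randn(64, LATENT_DIM)
fake_labels = torch.randint(0, NUM_CLASSES, (64,))
fake_imgs = G(z, fake_labels)

# Discriminator update: push real images toward 1 and generated images toward 0.
d_loss = bce(D(real_imgs, real_labels), torch.ones(64, 1)) + \
         bce(D(fake_imgs.detach(), fake_labels), torch.zeros(64, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator update: fool the discriminator into predicting 1 for generated images.
g_loss = bce(D(fake_imgs, fake_labels), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Tracking `d_loss` and `g_loss` per epoch for both models is enough to reproduce the kind of loss-curve comparison the study reports.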
Main Author: | Zeng Hongzhi |
---|---|
Format: | Article |
Language: | English |
Published: | EDP Sciences, 2025-01-01 |
Series: | ITM Web of Conferences |
Online Access: | https://www.itm-conferences.org/articles/itmconf/pdf/2025/01/itmconf_dai2024_03019.pdf |
---|---|
author | Zeng Hongzhi |
collection | DOAJ |
format | Article |
id | doaj-art-854353ba69f04c34a30e63a5baca5ef9 |
institution | Kabale University |
issn | 2271-2097 |
language | English |
publishDate | 2025-01-01 |
publisher | EDP Sciences |
record_format | Article |
series | ITM Web of Conferences |
spelling | Zeng Hongzhi (Guangdong University of Technology); ITM Web of Conferences 70 (2025) 03019; doi: 10.1051/itmconf/20257003019 |
title | Handwriting Digital Image Generation based on GAN: A Comparative Study of Basic GAN and CGAN Models |
url | https://www.itm-conferences.org/articles/itmconf/pdf/2025/01/itmconf_dai2024_03019.pdf |