Chest X-ray Image Synthesis Using Deep Convolutional GANs

Date

2024-05

Type

Conference paper

Conference title

IEEE 2024 International Symposium on Networks, Computers and Communications

Author(s)

Nada Ahmed Alhamdi
Mohammed Ezzedine Sunni

Abstract

Obtaining medical data for research purposes is often challenging due to its high cost, scarcity, and the requirement for patient consent; these limitations hinder the development of accurate and robust machine learning models in the medical field. Generative models offer a promising solution by enabling data augmentation and synthesizing fake data that closely resembles the characteristics of real data. This paper focuses on generating chest X-ray images using a dataset of 31,142 unlabeled images from a clinical center. After training our proposed DCGAN model for 200 epochs, we achieved impressive results: the generated images were visually appealing, demonstrating the effectiveness of our approach. Evaluation metrics further supported the quality of the generated images, with an FID score of 49.045, an SSIM of 0.4779, and a KID of 0.07979. However, it is important to acknowledge the biases and limitations of certain evaluation metrics. The FID score can be biased towards specific datasets, as it relies on feature embeddings extracted from a pre-trained Inception network; domain-specific biases can therefore influence the FID score. Similarly, the Inception Score (IS) is widely criticized for its unreliability and failure to capture the quality and realism of generated samples. To address these concerns, it is crucial to also rely on less biased metrics such as the KID score. To gain a comprehensive understanding of the generated and real data distributions, we depicted them in both 3D and 2D t-distributed Stochastic Neighbor Embedding (t-SNE) spaces. These visualizations provide insights into the distribution patterns and facilitate intuitive comparisons between the generated and real datasets.
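The FID score cited in the abstract is the Fréchet distance between Gaussian fits of feature embeddings of the real and generated images (in FID proper, those embeddings come from a pre-trained Inception network). A minimal sketch of that distance computation is below, with synthetic NumPy arrays standing in for the Inception features; the array shapes and random data are illustrative assumptions, not the paper's actual pipeline or values.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between two feature sets, each row one sample.

    FID fits a Gaussian (mean, covariance) to each set and computes
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrtm(S1 @ S2)).
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from sqrtm
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# Toy check: features drawn from the same distribution score lower
# (better) than features drawn from a shifted distribution.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))
fake_close = rng.normal(0.0, 1.0, size=(500, 8))
fake_far = rng.normal(3.0, 1.0, size=(500, 8))
print(frechet_distance(real, fake_close) < frechet_distance(real, fake_far))
```

Lower is better, which is why the abstract also reports KID: KID replaces the Gaussian assumption with a polynomial-kernel MMD estimate and is unbiased in the sample size, addressing the dataset-dependence concern raised above.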