Semi-Supervised Semantic Image Segmentation by Deep Diffusion Models and Generative Adversarial Networks

Publisher

World Scientific

Abstract

Typically, deep learning models for image segmentation are trained on large datasets of images annotated at the pixel level, which is expensive and highly time-consuming. One way to reduce the number of annotated images required for training is to adopt a semi-supervised approach. In this regard, generative deep learning models, specifically Generative Adversarial Networks (GANs), have been adapted to the semi-supervised training of segmentation models. This work proposes MaskGDM, a deep learning architecture that combines ideas from EditGAN, a GAN that jointly models images and their segmentations, with a generative diffusion model. With careful integration, we find that using a generative diffusion model improves EditGAN's performance on multiple segmentation datasets, both multi-class and binary. According to the quantitative results obtained, the proposed model improves multi-class image segmentation over the EditGAN and DatasetGAN models by 4.5% and 5.0%, respectively. Moreover, on the ISIC dataset, our proposal improves on the results of other models by up to 11% in the binary image segmentation setting.

Bibliographic citation

Jose Angel Diaz-Frances et al., Semi-supervised semantic image segmentation by deep diffusion models and generative adversarial networks, International Journal of Neural Systems, doi: 10.1142/S0129065724500576

Creative Commons license

Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International