DFCA: Disentangled Feature Contrastive Learning and Augmentation for Fairer Dermatological Diagnostics

Authors: Pengcheng Zhao, Xiaowei Ding

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that DFCA significantly improves both fairness and accuracy compared to state-of-the-art methods." Extensive experiments on two datasets show that DFCA, by combining disentangled feature contrastive learning and augmentation, improves both fairness and accuracy compared to SOTA methods.
Researcher Affiliation | Academia | Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
Pseudocode | No | The paper describes the proposed DFCA framework in detail through textual descriptions and a diagram (Figure 1), but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states, "We implement our DFCA model by PyTorch," but it does not provide any specific links to a code repository, an explicit statement about code release, or any mention of code in supplementary materials.
Open Datasets | Yes | "We use two well-known dermatology datasets to evaluate our proposed method: Fitzpatrick-17k dataset [Groh et al., 2021] and DDI dataset [Daneshjou et al., 2022]. Both of the datasets contain skin tone attribute."
Dataset Splits | No | The paper mentions training for a certain number of epochs and discusses 'in-domain' and 'out-domain' experiments, but it does not provide specific details on how the datasets (Fitzpatrick-17k and DDI) were split into training, validation, and test sets (e.g., percentages, exact counts, or references to predefined splits).
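Because no official split is given, a reproduction would have to define its own. The sketch below shows one way to build a deterministic, label-stratified train/val/test split; the 70/10/20 fractions, the seed, and the helper name `make_splits` are illustrative assumptions, not details from the paper.

```python
import random
from collections import defaultdict

def make_splits(labels, fracs=(0.7, 0.1, 0.2), seed=0):
    """Deterministic stratified index split (fractions/seed are assumptions)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_label[lab].append(idx)
    train, val, test = [], [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)  # shuffle within each class so splits are balanced
        n_tr = int(fracs[0] * len(idxs))
        n_va = int(fracs[1] * len(idxs))
        train += idxs[:n_tr]
        val += idxs[n_tr:n_tr + n_va]
        test += idxs[n_tr + n_va:]
    return train, val, test
```

Fixing the seed keeps the split reproducible across runs, which is exactly the detail a paper-supplied split specification would otherwise provide.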
Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper states, "We implement our DFCA model by PyTorch," but it does not provide a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | DFCA is first trained for 150 epochs on the real datasets, then for 100 epochs on the mixture with feature augmentation. The model is trained with the Adam optimizer at a learning rate lr = 0.0001, a batch size of 32, and loss weights α = 10, β = 0.5, and γ = 1.
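The reported hyperparameters can be collected into a small configuration sketch. Note that the excerpt lists only the weight values, so the mapping of α, β, γ onto specific loss terms is an assumption here, and the term names `l_cls`, `l_con`, `l_dis` are placeholders rather than the authors' code.

```python
# Hyperparameters reported in the paper's experiment setup.
CONFIG = {
    "epochs_real": 150,       # stage 1: training on the real datasets
    "epochs_augmented": 100,  # stage 2: training with feature augmentation
    "optimizer": "Adam",
    "lr": 1e-4,
    "batch_size": 32,
    "alpha": 10.0,            # loss weights as listed in the paper
    "beta": 0.5,
    "gamma": 1.0,
}

def total_loss(l_cls, l_con, l_dis, cfg=CONFIG):
    """Weighted sum of loss terms. Which weight pairs with which term
    is an assumption -- the excerpt gives only the values."""
    return cfg["alpha"] * l_cls + cfg["beta"] * l_con + cfg["gamma"] * l_dis
```

Keeping the reported values in one dictionary makes it easy to check a reimplementation against the paper's stated setup.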