Fair Training with Zero Inputs
Authors: Wenjie Pan, Jianqing Zhu, Huanqiang Zeng
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that the ZUT framework can improve the performance of multiple state-of-the-art methods in image classification, person re-identification, and semantic segmentation. |
| Researcher Affiliation | Academia | College of Engineering, Huaqiao University, Quanzhou 362021, China EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: ZUT Framework in Classification<br>Input: image tensors x ∈ ℝ^{B×3×H×W}, label GT<br>Parameter: loss weight α<br>1: z ← Tensor.zeros(1, 3, H, W)<br>2: y ← Concat(z, x)<br>3: f ← Classifier(Network(y))<br>4: v, u ← f[0], f[1:]<br>5: L ← α · L_z(v) + L_cls(u, GT) |
| Open Source Code | Yes | Code https://github.com/asd123pwj/ZUT |
| Open Datasets | Yes | Datasets VOC2007 (Everingham et al. 2010) is a multi-label classification dataset... CIFAR100-LT (Cao et al. 2019) is a long-tailed distribution dataset created by performing long-tailed sampling on CIFAR100 (Krizhevsky and Hinton 2009)... PRCC (Yang, Wu, and Zheng 2019) is an image-based clothes-changing person re-identification (CC-ReID) dataset... Celeb-ReID (Huang et al. 2019) is another image-based CC-ReID dataset... VC-Clothes (Wan et al. 2020) is a synthetic image-based CC-ReID dataset... CCVID (Gu et al. 2022) is a video-based CC-ReID dataset... ADE20K (Zhou et al. 2017) is a large-scale semantic segmentation dataset for scene understanding... |
| Dataset Splits | Yes | For the VOC2007 dataset, images are randomly resized and cropped to 256 × 256 during training, with a 50% probability of random horizontal flipping. During testing, images are resized to ensure the shortest side is 256 pixels, followed by a center crop of 256 × 256. For the CIFAR100-LT dataset, training involves random cropping to 32 × 32 with a 4-pixel padding and a 50% probability of random horizontal flipping. ... All experiments are implemented according to CAL (Gu et al. 2022), with training and testing strategies consistent with CAL... All experiments are implemented using MMSegmentation (Contributors 2020). Training involves 80,000 iterations with a batch size of 16. ... During training, images are randomly scaled and cropped to 512 × 512, with a 50% probability of random flipping. For testing, images are scaled to ensure the short side is 512 pixels. |
| Hardware Specification | Yes | All experiments were conducted on an RTX 3090 GPU. |
| Software Dependencies | No | All experiments are performed using MMPretrain (Contributors 2023). Each experiment runs for 100 epochs with a batch size of 64. The AdamW (Loshchilov and Hutter 2017) optimizer is employed... All experiments are implemented according to CAL (Gu et al. 2022)... All experiments are implemented using MMSegmentation (Contributors 2020). AdamW (Loshchilov and Hutter 2017) is utilized as the optimizer... |
| Experiment Setup | Yes | All experiments are performed using MMPretrain (Contributors 2023). Each experiment runs for 100 epochs with a batch size of 64. The AdamW (Loshchilov and Hutter 2017) optimizer is employed, with a learning rate set at 1e-4 and a weight decay of 0.3. The learning rate decays by cosine annealing (Loshchilov and Hutter 2016) with a minimum learning rate of 1e-5. The classification loss incorporates label smoothing (Szegedy et al. 2016) at 0.1. ... Training involves 80,000 iterations with a batch size of 16. AdamW (Loshchilov and Hutter 2017) is utilized as the optimizer with a learning rate of 1e-4 and weight decay of 1e-4. The Poly LR scheduler with γ = 0.9 is employed. Cross-entropy is used as the classification loss function. |
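The Algorithm 1 pseudocode above can be sketched in PyTorch. This is a minimal, hypothetical reading of the listing, not the authors' released code (see the linked repository for that): the names `network`, `classifier`, and the concrete form of the zero-input loss L_z are assumptions — here L_z is taken as pushing the zero-input logits toward zero, while the paper may define it differently.

```python
# Hypothetical sketch of Algorithm 1 (ZUT in classification).
# Assumptions: a PyTorch backbone/classifier pair, and L_z modeled as
# an MSE penalty driving the zero-input logits toward zero. The paper's
# exact L_z may differ; consult https://github.com/asd123pwj/ZUT.
import torch
import torch.nn as nn
import torch.nn.functional as F


def zut_step(network: nn.Module, classifier: nn.Module,
             x: torch.Tensor, gt: torch.Tensor,
             alpha: float = 0.1) -> torch.Tensor:
    """One ZUT training step: prepend an all-zero image to the batch,
    supervise the real images with the usual classification loss, and
    regularize the zero-input prediction with a separate loss term."""
    B, C, H, W = x.shape
    z = torch.zeros(1, C, H, W, device=x.device, dtype=x.dtype)  # zero input
    y = torch.cat([z, x], dim=0)          # Concat(z, x) along the batch dim
    f = classifier(network(y))            # logits for zero + real images
    v, u = f[0], f[1:]                    # split zero-input / real logits
    loss_z = F.mse_loss(v, torch.zeros_like(v))   # assumed form of L_z
    loss_cls = F.cross_entropy(u, gt)             # standard L_cls
    return alpha * loss_z + loss_cls      # L = α·L_z(v) + L_cls(u, GT)
```

Note that the zero image is added once per batch, so the extra forward cost is a single sample regardless of batch size.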