Training Matting Models Without Alpha Labels
Authors: Wenze Liu, Zixuan Ye, Hao Lu, Zhiguo Cao, Xiangyu Yue
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the AM-2K and P3M-10K datasets show that our paradigm achieves performance comparable to the fine-label-supervised baseline, while sometimes offering even more satisfying results than human-labeled ground truth. |
| Researcher Affiliation | Academia | Wenze Liu¹, Zixuan Ye², Hao Lu², Zhiguo Cao², Xiangyu Yue¹* — ¹ MMLab, The Chinese University of Hong Kong; ² School of Artificial Intelligence and Automation, Huazhong University of Science and Technology |
| Pseudocode | No | The paper describes methods and loss functions using mathematical notation and textual descriptions, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/poppuppy/alpha-free-matting |
| Open Datasets | Yes | Experiments on the AM-2K and P3M-10K datasets show that our paradigm achieves performance comparable to the fine-label-supervised baseline, while sometimes offering even more satisfying results than human-labeled ground truth. Dataset. Affected by the domain gap, models trained on synthetic data (Composition-1K (Xu et al. 2017), Distinctions-646 (Qiao et al. 2020), etc.) often perform worse on real-world images. Since the proposed method does not require fine labels, it does not rely on data synthesis to produce labels. Hence, we verify the effectiveness of our method directly on the natural datasets AM-2K (Li et al. 2022) and P3M-10K (Li et al. 2021). |
| Dataset Splits | No | The paper mentions using the "AM-2K (Li et al. 2022) test set" and the "P3M-NP-500 test set of P3M-10K (Li et al. 2021)", indicating the use of predefined test sets. However, the provided text does not give exact split percentages, per-split sample counts, or a splitting methodology sufficient for reproducibility. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., GPU models, CPU models, memory details) used for running the experiments. |
| Software Dependencies | No | The paper states, "We choose the ViTMatte (Yao et al. 2023) as the deep matting model," but does not specify version numbers for ViTMatte, Python, PyTorch, CUDA, or other relevant software libraries. |
| Experiment Setup | Yes | In Eq. (8), the window size K is set to 11 by default. For the total loss in Eq. (10), λ is set to 10. Other details can be found in the supplementary material. |
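The two hyperparameters reported above can be collected into a minimal configuration sketch. Only the values K = 11 and λ = 10 come from the paper; the config structure, the name `total_loss`, and the additive loss form are assumptions for illustration, not the paper's actual Eq. (10):

```python
# Hyperparameters reported in the paper's experiment setup.
# Only the two values below are stated; names and structure are assumed.
EXPERIMENT_CONFIG = {
    "window_size_K": 11,  # window size K in Eq. (8), default
    "lambda": 10.0,       # weight lambda in the total loss, Eq. (10)
}


def total_loss(base_loss: float, weighted_loss: float,
               lam: float = EXPERIMENT_CONFIG["lambda"]) -> float:
    """Hypothetical weighted combination of the form L = L_base + lam * L_aux.

    The actual terms of Eq. (10) are not reproduced here; this only shows
    where the reported lambda = 10 would enter such a combination.
    """
    return base_loss + lam * weighted_loss
```

This is a sketch of how the reported settings might be wired into a training script; the real definitions of the loss terms are in the paper and its supplementary material.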