Path-Adaptive Matting for Efficient Inference Under Various Computational Cost Constraints

Authors: Qinglin Liu, Zonglin Li, Xiaoqian Lv, Xin Sun, Ru Li, Shengping Zhang

AAAI 2025

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | Experiments on five image matting datasets demonstrate that the proposed PAM framework achieves competitive performance across a range of computational cost constraints.
Researcher Affiliation | Academia | 1 Harbin Institute of Technology, Weihai, China. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Performance-Aware Path Learning
Open Source Code | No | The paper describes implementation details using the PyTorch framework but does not state that the code is open source or provide a link to a repository for the described method.
Open Datasets | Yes | Experiments are conducted on five image matting datasets: Adobe Composition-1k (Xu et al. 2017), Distinctions-646 (Qiao et al. 2020), Transparent-460 (Cai et al. 2022), Semantic Image Matting (SIMD) (Sun, Tang, and Tai 2021), and the real-world Automatic Image Matting-500 (AIM-500) dataset (Li, Zhang, and Tao 2021).
Dataset Splits | No | The paper uses several datasets for evaluation but does not specify training, validation, or test splits. It states: 'To avoid overfitting, we follow the data preprocessing methods of previous matting methods to process the train data (Forte and Pitié 2020).'
Hardware Specification | Yes | We train our PAM framework on an RTX 3090 GPU with a batch size of 4.
Software Dependencies | No | The proposed method is implemented using the PyTorch framework. (No specific version numbers are given for PyTorch or other libraries.)
Experiment Setup | Yes | We train our PAM framework on an RTX 3090 GPU with a batch size of 4. All network weights are initialized using the Kaiming initializer (He et al. 2015). The networks are trained using the RAdam optimizer (Liu et al. 2020a) with a weight decay of 3×10⁻⁵ and betas of (0.5, 0.999). The initial learning rate is set to 3×10⁻⁴ and decays to zero using a cosine annealing scheduler in each stage. In the first stage, we train the entire PAM network for 150 epochs. In the second stage, we perform warm-up training by randomly sampling sub-networks and training them for 20 epochs. In the third stage, we train PAM with the performance-aware path learning strategy for 150 epochs. The other coefficients used in this method are configured as follows: N_a = 4, λ_α = 1, λ_ds = 0.05, λ_pt = 0.05, λ_1 = 1, λ_comp = 0.25, λ_lap = 0.5, ε = 10⁻⁶, N_e = 4, N_val = 10³, N_g = 10, and τ = 1.
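The learning-rate schedule reported in the Experiment Setup row (initial rate 3×10⁻⁴ decaying to zero via cosine annealing over each 150-epoch stage) can be sketched in plain Python. This is a minimal illustration of the stated schedule, not the authors' code: the function name, per-epoch stepping, and stage length are assumptions for the sketch.

```python
import math

# Values as reported in the paper's experiment setup (first stage).
LR_INIT = 3e-4   # initial learning rate
EPOCHS = 150     # epochs in the first training stage

def cosine_annealed_lr(epoch: int, lr_init: float = LR_INIT, total: int = EPOCHS) -> float:
    """Cosine annealing from lr_init at epoch 0 down to zero at epoch `total`."""
    return 0.5 * lr_init * (1.0 + math.cos(math.pi * epoch / total))

# The rate starts at 3e-4, passes ~1.5e-4 at the halfway point, and reaches 0.
schedule = [cosine_annealed_lr(e) for e in range(EPOCHS + 1)]
```

In practice this corresponds to PyTorch's `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max` set to the stage length and `eta_min=0`, stepped once per epoch.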