Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

C2PD: Continuity-Constrained Pixelwise Deformation for Guided Depth Super-Resolution

Authors: Jiahui Kang, Qing Cai, Runqing Tan, Yimei Liu, Zhi Liu

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate state-of-the-art performance and exhibit increasingly prominent advantages with scale escalation, suggesting new directions for further breakthroughs in large-scale tasks such as x32. We conduct experiments on NYU v2 (Wright et al. 2010), Middlebury (Hirschmuller and Scharstein 2007; Scharstein et al. 2014), Lu (Lu, Ren, and Liu 2014), and RGBDD (He et al. 2021)."
Researcher Affiliation | Academia | "1Faculty of Computer Science and Technology, Ocean University of China; 2School of Information Science and Engineering, Shandong University. EMAIL, EMAIL, EMAIL"
Pseudocode | No | No pseudocode or algorithm blocks are provided; the methodology is described only in text and mathematical formulas.
Open Source Code | No | No explicit statement about releasing source code, and no link to a code repository, appears in the paper.
Open Datasets | Yes | "We conduct experiments on NYU v2 (Wright et al. 2010), Middlebury (Hirschmuller and Scharstein 2007; Scharstein et al. 2014), Lu (Lu, Ren, and Liu 2014), and RGBDD (He et al. 2021)."
Dataset Splits | Yes | "Consistent with prior studies (Kim, Ponce, and Ham 2021; He et al. 2021; Zhao et al. 2022; Zhong et al. 2023a; Wang, Yan, and Yang 2024), we utilize the first 1000 RGB-D pairs from the NYU-v2 dataset for training, with the remaining 449 pairs reserved for validation. Furthermore, the same pretrained model trained on NYUv2 is evaluated on Middlebury (30 pairs), Lu (6 pairs), and RGBDD (405 pairs) datasets."
Hardware Specification | Yes | "The model is implemented using PyTorch (Paszke et al. 2017) and trained on one RTX 3090ti GPU."
Software Dependencies | No | The paper mentions PyTorch (Paszke et al. 2017) but provides no version number for it or for any other software library. It mentions the Adam optimizer, which is an algorithm, not a versioned software dependency.
Experiment Setup | Yes | "During the training phase, we randomly crop 256 x 256 image patches from depths and RGB images as inputs. Following (Zhong et al. 2023a), we augment the training data with random flipping and rotation. Adam optimizer is utilized (Kingma and Ba 2014) with β1 = 0.9 and β2 = 0.999, employing an initial learning rate of 1 x 10^-4. The model is implemented using PyTorch (Paszke et al. 2017) and trained on one RTX 3090ti GPU. Training typically requires two days for the NYU v2 dataset. Additionally, our output channels are set to 32, while DADA uses 64."
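The Experiment Setup row quotes concrete hyperparameters (random 256 x 256 crops, flip/rotation augmentation, Adam with β1 = 0.9, β2 = 0.999, initial learning rate 1e-4). A minimal PyTorch sketch of that configuration is given below. The model here is a one-layer placeholder, not the paper's C2PD architecture, and the `augment` helper is an assumed implementation of the augmentation the quote describes, since the paper releases no code.

```python
# Sketch of the quoted training configuration, NOT the authors' code.
# Placeholder model and augmentation; only the optimizer settings and
# crop/augmentation choices come from the paper's quoted setup.
import torch
import torch.nn as nn

model = nn.Conv2d(4, 1, kernel_size=3, padding=1)  # placeholder, not C2PD
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999)  # quoted hyperparameters
)

def augment(rgb: torch.Tensor, depth: torch.Tensor, patch: int = 256):
    """Jointly apply a random 256x256 crop, random horizontal flip,
    and random 90-degree rotation to an (RGB, depth) pair."""
    _, h, w = depth.shape
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()
    rgb = rgb[:, top:top + patch, left:left + patch]
    depth = depth[:, top:top + patch, left:left + patch]
    if torch.rand(1).item() < 0.5:           # random flip
        rgb, depth = rgb.flip(-1), depth.flip(-1)
    k = torch.randint(0, 4, (1,)).item()     # random rotation by k*90 degrees
    return rgb.rot90(k, (-2, -1)), depth.rot90(k, (-2, -1))
```

Cropping, flipping, and rotating the RGB and depth tensors with the same parameters keeps the two modalities pixel-aligned, which guided depth super-resolution requires.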