Improving Deep Regression with Tightness

Authors: Shihao Zhang, Yuguang Yan, Angela Yao

ICLR 2025

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "We experiment on three deep regression tasks: age estimation, depth estimation, and coordinate prediction, and compare with RankSim (Gong et al., 2022), Ordinal Entropy (OE) (Zhang et al., 2023), and PH-Reg (Zhang et al., 2024). ... Tables 1 and 2 show results on age estimation and depth estimation, respectively. ... We conduct the ablation study on AgeDB-DIR for age estimation. The results are given in Table 5."
Researcher Affiliation: Academia. Shihao Zhang (National University of Singapore), Yuguang Yan (Guangdong University of Technology), Angela Yao (National University of Singapore).
Pseudocode: No. The paper describes its methods and theoretical analysis in prose but does not include any explicit pseudocode or algorithm blocks.
Open Source Code: Yes. "Code: https://github.com/needylove/Regression_tightness."
Open Datasets: Yes. "For age estimation, we use AgeDB-DIR (Yang et al., 2021)... For depth estimation, we use NYUD2-DIR (Yang et al., 2021)..."
Dataset Splits: Yes. "Both AgeDB-DIR and NYUD2-DIR contain three disjoint subsets (i.e., Many, Med, and Few) divided from the whole set."
Hardware Specification: No. The paper reports training times and memory consumption in Table 6 but does not give specific hardware details, such as the GPU or CPU models used for the experiments.
Software Dependencies: No. The paper does not state specific software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup: Yes. "For age estimation, we use AgeDB-DIR... γ and λ are set to 0.1 and 100, respectively. For depth estimation, we use NYUD2-DIR... γ and λ are set to 0.05 and 10, respectively. We set the total target dimension M to be 8 for both tasks. ... We monitor the time and memory consumption for training a model from the beginning to the end with a batch size equal to 128."
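The reported setup can be collected into a single configuration sketch. Only the values of γ, λ, the target dimension M, and the batch size come from the quoted text; the dictionary layout, key names, and the `get_config` helper are illustrative assumptions:

```python
# Hyperparameters quoted from the paper's experiment setup.
# Key names and structure are hypothetical, not from the paper.
EXPERIMENT_CONFIGS = {
    "age_estimation": {      # AgeDB-DIR
        "gamma": 0.1,
        "lambda": 100,
        "target_dim_M": 8,   # total target dimension M
    },
    "depth_estimation": {    # NYUD2-DIR
        "gamma": 0.05,
        "lambda": 10,
        "target_dim_M": 8,
    },
}

BATCH_SIZE = 128  # used when monitoring training time and memory


def get_config(task: str) -> dict:
    """Look up the reported hyperparameters for a given task."""
    return EXPERIMENT_CONFIGS[task]
```

For example, `get_config("depth_estimation")["gamma"]` returns 0.05, matching the depth-estimation setting quoted above.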