Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
OmniSR: Shadow Removal Under Direct and Indirect Lighting
Authors: Jiamin Xu, Zelong Li, Yuxin Zheng, Chenyu Huang, Renshu Gu, Weiwei Xu, Gang Xu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments show that our method outperforms state-of-the-art shadow removal techniques and can effectively generalize to indoor and outdoor scenes under various lighting conditions, enhancing the overall effectiveness and applicability of shadow removal methods. |
| Researcher Affiliation | Academia | Jiamin Xu¹, Zelong Li¹, Yuxin Zheng¹, Chenyu Huang¹, Renshu Gu¹, Weiwei Xu², Gang Xu¹* — ¹Hangzhou Dianzi University, ²Zhejiang University |
| Pseudocode | No | The paper describes the network architecture and attention mechanisms in detail but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks with structured steps. |
| Open Source Code | Yes | Code: https://blackjoke76.github.io/Projects/OmniSR/ |
| Open Datasets | Yes | We conducted our experiments on ISTD (Wang, Li, and Yang 2018), ISTD+ (Le and Samaras 2019), SRD (Qu et al. 2017), WRSD+ (Vasluianu, Seizinger, and Timofte 2023), and the proposed INS dataset. ... The 3DFront (Fu et al. 2021a,b), ABO (Collins et al. 2022), and Objaverse (Deitke et al. 2023) datasets within an empty environment. |
| Dataset Splits | Yes | The dataset includes 30,000 training and 2,000 testing images, all with a resolution of 512 × 512. The training and testing images are generated from distinct scenes with different objects and materials. ... Our training data includes 20,000 image pairs from the 3DFront scenes and 10,000 image pairs from the object composition scenes. |
| Hardware Specification | Yes | Our model is trained on a GPU server with four GeForce RTX 4090 GPUs using PyTorch 2.0.1 (Paszke et al. 2017) with CUDA 11.7. |
| Software Dependencies | Yes | Our model is trained on a GPU server with four GeForce RTX 4090 GPUs using PyTorch 2.0.1 (Paszke et al. 2017) with CUDA 11.7. |
| Experiment Setup | Yes | We employ the Adam optimizer (Kingma and Ba 2015) for training. The initial learning rate is set to 2 × 10⁻⁴ and adjusted using a cosine annealing scheduler (Loshchilov and Hutter 2016). Additional details can be found in the supplementary. ... ϵ = 10⁻³ is a constant in all the experiments. |
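The reported training setup pairs Adam with a cosine annealing learning-rate schedule starting at 2 × 10⁻⁴. A minimal sketch of that schedule (Loshchilov and Hutter 2016) is below; the initial rate matches the paper, while `total_steps` and `lr_min` are illustrative assumptions, since the paper defers those details to its supplementary material.

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max=2e-4, lr_min=0.0):
    """Cosine annealing schedule (Loshchilov and Hutter 2016).

    lr_max = 2e-4 matches the paper's reported initial learning rate.
    total_steps and lr_min are hypothetical; the paper does not state them.
    """
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

# The rate decays smoothly from lr_max at step 0 to lr_min at the final step.
schedule = [cosine_annealing_lr(t, total_steps=100) for t in range(101)]
```

In practice this is what PyTorch's `torch.optim.lr_scheduler.CosineAnnealingLR` computes when stepped once per iteration with `eta_min` as the floor.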