Dual-branch Graph Feature Learning for NLOS Imaging

Authors: Xiongfei Su, Tianyi Zhu, Lina Liu, Zheng Chen, Yulun Zhang, Siyuan Li, Juntian Ye, Feihu Xu, Xin Yuan

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments demonstrate that our method attains the highest level of performance among existing methods across synthetic and real data. This section details the implementation of our algorithm, presents simulation and real-data results to show DGNLOS's superiority, and includes an ablation study for module evaluation.
Researcher Affiliation | Collaboration | 1 Zhejiang University, Hangzhou, China (e-mail: EMAIL); 2 China Mobile Research Institute, Beijing, China; 3 Westlake University, Hangzhou, China; 4 Shanghai Jiao Tong University, Shanghai, China; 5 University of Science and Technology of China, Anhui, China
Pseudocode | No | The paper describes the proposed method through text and figures (e.g., Figure 3), but it does not contain any structured pseudocode or algorithm blocks. It mentions 'Algorithm 2' from a different paper, but the algorithm itself is not present in this document.
Open Source Code | No | The paper does not explicitly provide concrete access to source code through a specific repository link, an explicit code release statement, or code in supplementary materials.
Open Datasets | Yes | To ensure a fair comparison with existing methods, the training dataset, downloaded from a Google Drive link, is identical to that of LFE (Chen et al. 2020). The real data includes 6 scenes provided in (Lindell, Wetzstein, and O'Toole 2019).
Dataset Splits | No | The paper mentions a 'training dataset consists of 3000 generated measurements' and 'The real data includes 6 scenes,' but it does not specify any training/test/validation dataset splits (e.g., percentages, sample counts, or explicit splitting methodology) for reproducibility.
Hardware Specification | Yes | We utilize an NVIDIA GeForce RTX 3090 to train and test the proposed model.
Software Dependencies | Yes | The proposed method is implemented in PyTorch 1.7.
Experiment Setup | Yes | The models are trained using Adam (Kingma and Ba 2014) with an initial learning rate of 8e-4, which is gradually reduced to 1e-6 with cosine annealing (Loshchilov and Hutter 2016). The albedo branch is first trained on 8 samples for 150 epochs, and the depth branch is trained on 8 samples for 80 epochs.
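The learning-rate schedule quoted above can be sketched as follows. This is a minimal illustration of cosine annealing from 8e-4 down to 1e-6 over the 150-epoch albedo-branch run, not the authors' code; the helper function and its names are hypothetical, and in PyTorch the equivalent behavior is provided by `torch.optim.lr_scheduler.CosineAnnealingLR` with `eta_min=1e-6`.

```python
import math

def cosine_annealing_lr(epoch, total_epochs=150, lr_max=8e-4, lr_min=1e-6):
    """Learning rate at a given epoch under cosine annealing (hypothetical
    helper illustrating the schedule described in the experiment setup)."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

# The schedule starts at the initial rate and decays to the floor:
start_lr = cosine_annealing_lr(0)    # 8e-4 at the start of training
final_lr = cosine_annealing_lr(150)  # 1e-6 at the final epoch
```

For the depth branch, the same formula would apply with `total_epochs=80`, per the setup quoted above.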