Reveal Object in Lensless Photography via Region Gaze and Amplification

Authors: Xiangjun Yin, Huihui Yue

ICLR 2025

Reproducibility Variable — Result — LLM Response
Research Type — Experimental: "Extensive experiments demonstrate the exciting performance of our method. Our codes will be released at https://github.com/YXJ-NTU/Lensless-COD."
Researcher Affiliation — Academia: "Xiangjun Yin¹, Huihui Yue². ¹Centre for Integrated Circuits and Systems (CICS), Nanyang Technological University, Singapore; ²School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore. EMAIL, EMAIL"
Pseudocode — No: The paper describes its methodology in text and diagrams (e.g., Figure 2, "Overview of our RGANet"), but it contains no section or block explicitly labeled "Pseudocode" or "Algorithm" with structured, code-like steps.
Open Source Code — Yes: "Our codes will be released at https://github.com/YXJ-NTU/Lensless-COD."
Open Datasets — Yes: "Furthermore, we contribute the first relevant dataset as a benchmark to prosper the lensless imaging community." "We contribute corresponding datasets as benchmarks and extensive experiments demonstrate that our method can accurately detect concealed objects from lensless imaging measurements." "The simulated data is collected from four famous COD datasets, including CAMO (Trung-Nghia et al. (2019)), CHAMELEON (Przemysław), COD10K (Fan et al. (2022)), and NC4K (Lv et al. (2021))."
Dataset Splits — Yes: "We split the formed dataset into multiple datasets for training and testing. For training, we randomly select 2060 pairs from DLCOD and merge them with SLCOD to generate a training set containing 3917 paired data. For testing, we divide the remaining pairwise data of DLCOD into two datasets, i.e., Test-Easy with 220 paired data and Test-Hard with 320 paired data, according to the difficulty of double-checking."
Hardware Specification — Yes: "All experiments are conducted on a Linux 20.04 server with an NVIDIA GTX 3090, utilizing PyTorch 1.8.0."
Software Dependencies — Yes: "All experiments are conducted on a Linux 20.04 server with an NVIDIA GTX 3090, utilizing PyTorch 1.8.0."
Experiment Setup — Yes: "The ADAM optimizer is used for training with a cosine learning-rate scheduling policy defined as lr = 0.5 · init_lr · (1 + cos(π · epoch / max_epoch)). Here, the learning rate lr is initialized with init_lr = 5 × 10⁻⁴, and the total training length is set to max_epoch = 100, with epoch ranging from 1 to max_epoch. The batch size is configured as 8."
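The dataset-split row above implies the underlying sizes: DLCOD holds 2600 pairs (2060 for training plus 220 Test-Easy plus 320 Test-Hard), and SLCOD must hold 1857 pairs (3917 − 2060). A minimal sketch of that split, assuming DLCOD and SLCOD are simply lists of (measurement, mask) pairs; `build_splits` is a hypothetical helper, and the easy/hard slice here is a random placeholder, whereas the paper partitions the test pairs by difficulty:

```python
import random

def build_splits(dlcod, slcod, n_train_dlcod=2060, n_easy=220, seed=0):
    """Split DLCOD/SLCOD as described in the paper's data protocol.

    Returns (train, test_easy, test_hard). The easy/hard boundary here is
    a random placeholder; the paper assigns it by difficulty.
    """
    rng = random.Random(seed)
    shuffled = dlcod[:]
    rng.shuffle(shuffled)
    train = shuffled[:n_train_dlcod] + slcod      # 2060 + 1857 = 3917 pairs
    remaining = shuffled[n_train_dlcod:]          # 540 held-out DLCOD pairs
    return train, remaining[:n_easy], remaining[n_easy:]

dlcod = [("measurement", "mask")] * 2600          # 2060 + 220 + 320
slcod = [("measurement", "mask")] * 1857          # 3917 - 2060
train, test_easy, test_hard = build_splits(dlcod, slcod)
print(len(train), len(test_easy), len(test_hard))  # 3917 220 320
```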
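The cosine schedule quoted in the Experiment Setup row can be written directly from the formula; a minimal sketch (the function name and standalone form are mine — in practice this would be attached to the ADAM optimizer, e.g. via PyTorch's `LambdaLR`):

```python
import math

def cosine_lr(epoch, max_epoch=100, init_lr=5e-4):
    """Cosine schedule from the paper:
    lr = 0.5 * init_lr * (1 + cos(pi * epoch / max_epoch))."""
    return 0.5 * init_lr * (1 + math.cos(math.pi * epoch / max_epoch))

# Decays smoothly from init_lr at the start to 0 at the final epoch.
print(cosine_lr(1))    # just under 5e-4
print(cosine_lr(100))  # 0.0
```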