LIBA: Language Instructed Multi-granularity Bridge Assistant for 3D Visual Grounding

Authors: Yuan Wang, Ya-Li Li, W U Eastman Z Y, Shengjin Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Experiments on ScanRefer and Nr3D/Sr3D benchmarks substantiate the superiority of our LIBA, trumping state-of-the-arts by a considerable margin." From Experiments, Quantitative Comparisons: "Performance on the ScanRefer dataset. As shown in Table 1, our LIBA method outperforms all competitors by a significant margin across all test subsets."
Researcher Affiliation: Academia. Yuan Wang [1,2], Ya-Li Li [1,2], W U Eastman Z Y [1,2], Shengjin Wang [1,2]*; [1] Department of Electronic Engineering, Tsinghua University, China; [2] Beijing National Research Center for Information Science and Technology (BNRist), China. EMAIL, EMAIL
Pseudocode: No. The paper describes its methods in narrative text and architectural diagrams (Figures 1, 2, and 3) but contains no explicitly labeled pseudocode or algorithm blocks with structured steps.
Open Source Code: No. The paper neither states that source code for the described methodology will be released nor provides a link to a code repository.
Open Datasets: Yes. "Experiments on ScanRefer and Nr3D/Sr3D benchmarks substantiate the superiority of our LIBA..." "ScanRefer (Chen, Chang, and Nießner 2020) and ReferIt3D (Achlioptas et al. 2020) emerge as pioneers for the 3D-VG task."
Dataset Splits: No. The paper mentions evaluating on "test subsets" for ScanRefer and on categories such as "Unique", "Multiple", "All", "Hard", and "VD" for the Nr3D/Sr3D benchmarks; these are evaluation subsets, not explicit training/validation/test splits with percentages or sample counts needed for reproducibility.
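For illustration only, a minimal sketch of the kind of explicit, seeded train/validation/test split specification the review finds missing. The function name and all numbers here are hypothetical and do not come from the LIBA paper.

```python
from random import Random

def split_indices(n, train_frac=0.7, val_frac=0.15, seed=0):
    """Deterministically split n sample indices into train/val/test lists.

    A fixed seed plus stated fractions makes the split reproducible,
    which is what an explicit dataset-split report would provide.
    """
    idx = list(range(n))
    Random(seed).shuffle(idx)          # seeded shuffle -> same order every run
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))  # → 700 150 150
```

Reporting the fractions, the seed, and the resulting sample counts is what distinguishes a reproducible split from evaluation-only subsets such as "Unique" or "Hard".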
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU models, CPU types, memory) used to conduct the experiments.
Software Dependencies: No. The paper mentions models and techniques such as BERT, DeBERTa, PointNet++, and LoRA, but provides no version numbers for software dependencies, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup: No. The paper describes the proposed modules and loss functions but omits hyperparameter values (e.g., learning rate, batch size, number of epochs), optimizer configuration, and other system-level training settings required for reproducibility.
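To make the criterion concrete, a hypothetical sketch of the minimal experiment-setup record the review says is absent. Every key and value below is invented for illustration; none is taken from the LIBA paper.

```python
# Hypothetical training configuration; values are illustrative only.
train_config = {
    "optimizer": "AdamW",     # optimizer family
    "learning_rate": 1e-4,    # peak learning rate
    "batch_size": 16,         # samples per step
    "epochs": 50,             # total training epochs
    "lr_schedule": "cosine",  # decay schedule
    "weight_decay": 0.01,
    "seed": 42,               # fixed seed for reproducibility
}

def is_reproducible_setup(cfg):
    """Check that the minimum settings needed to rerun training are present."""
    required = {"optimizer", "learning_rate", "batch_size", "epochs", "seed"}
    return required <= cfg.keys()

print(is_reproducible_setup(train_config))  # → True
```

A paper that reports at least the `required` fields above would satisfy this reproducibility variable; describing modules and losses alone does not.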