LAGD: Local Topological-Alignment and Global Semantic-Deconstruction for Incremental 3D Semantic Segmentation
Authors: Yumin Zhang, Haoran Duan, Rui Sun, Yue Cheng, Tejal Shah, Rajiv Ranjan, Bo Wei
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results illustrate the superiority of our proposed LAGD. Comprehensive experiments on two classic 3D semantic segmentation benchmarks: S3DIS (Armeni et al. 2016) and ScanNet (Dai et al. 2017) illustrate the effectiveness of our proposed LAGD. |
| Researcher Affiliation | Academia | 1School of Computing, Newcastle University, United Kingdom 2School of Software Engineering, Beijing Jiaotong University, China EMAIL, EMAIL |
| Pseudocode | No | The paper describes the proposed method, LAGD, and its components (ITA and SDD) in detail within the 'Proposed Method' section, including equations and descriptive text, but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Two representative 3D semantic segmentation datasets are leveraged to conduct comparison experiments. Stanford 3D Indoor Spaces (S3DIS) (Armeni et al. 2016) is a large-scale benchmark for indoor scene understanding. ... ScanNet (Dai et al. 2017) is another challenging 3D segmentation benchmark. |
| Dataset Splits | Yes | Specifically, in the short-term setting, we follow the setting in GUAT (Yang et al. 2023a) which includes Cnovel = {5, 3, 1} in both S3DIS and ScanNet datasets, and each has a random state split M0 and an alphabetical split M1. Besides, we set Cnovel = {1, 1, ..., 1}, which consists of a total of 11 steps on ScanNet to evaluate model performance under long-term incremental states. Comparison results are summarized in Tables 1, 2, and 3, respectively. |
| Hardware Specification | Yes | Experiments are conducted on Tesla-V100. |
| Software Dependencies | No | The paper mentions using PointNet++ and Adam optimizer but does not specify version numbers for these or any other software libraries or frameworks. |
| Experiment Setup | Yes | Specifically, for S3DIS, ... we set the batch size as 32, the learning rate as 0.001, and the hyper-parameters {α, β, γ} are set as {10, 1, 1}. Each incremental state is trained 32 epochs, and utilizes the Adam optimizer (Kingma and Ba 2014) with initial 0.001 learning rate. For ScanNet, ... we set the batch size as 32, the learning rate is 0.001, and the hyper-parameters {α, β, γ} are set as {10, 1, 1}, and each incremental state is trained 300 epochs. We leverage Adam optimizer with an initial 0.001 learning rate and the decay factor is set as 0.7 for 50 epochs. |
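The ScanNet schedule quoted above (initial learning rate 0.001, decayed by a factor of 0.7 every 50 epochs, for 300 epochs per incremental state) implies a simple step-decay rule. A minimal sketch of that rule, assuming a standard step-decay interpretation (the paper does not spell out the exact scheduler implementation):

```python
def stepped_lr(epoch, base_lr=0.001, gamma=0.7, step_size=50):
    """Step-decay learning rate: multiply base_lr by gamma once every
    step_size epochs. Matches the reported ScanNet setup under the
    assumption of a conventional StepLR-style schedule."""
    return base_lr * gamma ** (epoch // step_size)

# Learning rate across the reported 300-epoch incremental state
schedule = [stepped_lr(e) for e in range(300)]
print(schedule[0])    # 0.001 (initial learning rate)
print(schedule[50])   # 0.001 * 0.7 = 0.0007 after the first decay
```

Under this reading, the final epochs of each 300-epoch state run at 0.001 x 0.7^5, roughly 1.7e-4.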