RDPA: Real-Time Distributed-Concentrated Penetration Attack for Point Cloud Learning
Authors: Youtong Shi, Lixin Chen, Yu Zang, Chenhui Yang, Cheng Wang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the real-time distributed-concentrated penetration attack (RDPA) framework can achieve state-of-the-art (SOTA) success rates by perturbing only 3.5% of points, and achieves the best penetration against mainstream defense methods such as SRS and SOR. Comprehensive experiments show that the proposed attack framework is capable of providing SOTA results on various benchmarks. We also provide extensive ablation studies to examine the effect and robustness of different parts. |
| Researcher Affiliation | Academia | 1. Fujian Key Lab of Sensing and Computing for Smart Cities, School of Informatics, Xiamen University (XMU), China; 2. Key Laboratory of Multimedia Trusted Perception and Efficient Computing, XMU, China. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods and processes verbally and with diagrams, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor does it provide a link to a code repository. It mentions "In the future, we plan to extend RDPA to real-world applications" which suggests the code is not yet publicly available. |
| Open Datasets | Yes | We take ModelNet40 [Wu et al., 2015] and PointNet [Qi et al., 2017a] as the basic experimental dataset and classification network. Additionally, we validate RDPA on ScanObjectNN [Uy et al., 2019] with the setting as in [Kim et al., 2021], which is a high-quality real-world dataset and includes 15,000 objects. |
| Dataset Splits | Yes | ModelNet40 comprises 12,311 CAD models across 40 common object categories, where 9,843 objects were allocated for training and 2,468 served as the testing set. |
| Hardware Specification | Yes | We train Disruptor on PyTorch using an NVIDIA RTX 3090 Ti GPU, over 400 epochs with a batch size of 64. |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number for this or any other software dependency. |
| Experiment Setup | Yes | We employ the Adam optimizer and a cosine annealing LR scheduler, setting the learning rate to 0.001 and the cosine annealing cycle to 20. We train Disruptor on PyTorch using an NVIDIA RTX 3090 Ti GPU, over 400 epochs with a batch size of 64. Within the joint loss terms, we conducted an exhaustive search [Kim et al., 2021] for the parameters in Eq. 10 and empirically set them as α = 5 and β = 50. For the definition of Lmsp, we take γ = 1. |
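The reported schedule (Adam, lr = 0.001, cosine annealing cycle of 20, 400 epochs) can be sketched as below. This is a minimal illustration using the closed-form cosine-annealing learning rate (the form used by PyTorch's `CosineAnnealingLR`), written in plain Python since the paper's Disruptor network and joint loss are not public; the minimum learning rate `ETA_MIN = 0.0` is an assumption, as the paper does not state it.

```python
import math

# Reported hyperparameters: base lr 0.001, annealing cycle T_max = 20,
# 400 epochs (batch size 64). ETA_MIN is an assumed default of 0.0.
BASE_LR, T_MAX, ETA_MIN, EPOCHS = 0.001, 20, 0.0, 400

# Joint-loss weights reported for Eq. 10: alpha = 5, beta = 50, gamma = 1.
ALPHA, BETA, GAMMA = 5.0, 50.0, 1.0

def cosine_annealing_lr(epoch: int) -> float:
    """Learning rate at a given epoch under cosine annealing:
    lr = eta_min + (base - eta_min) * (1 + cos(pi * epoch / T_max)) / 2."""
    return ETA_MIN + 0.5 * (BASE_LR - ETA_MIN) * (
        1 + math.cos(math.pi * epoch / T_MAX)
    )

# The lr starts at BASE_LR, reaches ETA_MIN at epoch 20, and oscillates
# with period 2 * T_MAX over the 400-epoch run.
schedule = [cosine_annealing_lr(e) for e in range(EPOCHS)]
```

In a PyTorch training loop, the same behavior would come from `torch.optim.Adam(params, lr=0.001)` paired with `torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20)`, stepping the scheduler once per epoch.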