Boost Embodied AI Models with Robust Compression Boundary
Authors: Chong Yu, Tao Chen, Zhongxue Gan
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on typical embodied AI models and benchmarking tasks to show its efficacy in finding the optimal balance between accuracy, efficiency, and robustness in real-world conditions. For the experiments in this section, we choose PyTorch version 1.9.0 as the framework to train all baseline and efficient neural models. All training experimental results are obtained with A100 GPU clusters. The comparison results in Tables 1 and 2 show that BRCB steadily provides a smaller accuracy drop for the compressed models on benign samples than prior sparse-pruning and quantization methods. |
| Researcher Affiliation | Academia | Chong Yu¹, Tao Chen², Zhongxue Gan¹ (¹Academy for Engineering and Technology, Fudan University; ²School for Information Science and Technology, Fudan University). EMAIL, EMAIL |
| Pseudocode | No | The paper describes the Boost Robust Compression Boundary (BRCB) algorithm and its mechanisms (Against-Corruption Mechanism, Push the Limitation of Robustness Boundary) using descriptive text and flow diagrams (e.g., Figure 4), but it does not include any explicitly structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not explicitly state that the code for the BRCB methodology is open-sourced or provide a link to a code repository. Footnotes 1, 2, and 3 provide links to external projects (BEVFormer, BEVFusion, OpenVLA) that are used as experiment target models, not the implementation of BRCB itself. |
| Open Datasets | No | The paper refers to 'benchmarking tasks' for autonomous driving and robotics and mentions models like BEVFormer, BEVFusion, and OpenVLA. It discusses using 'natural or adversarial corrupted samples' and 'collected corrupted images'. However, it does not explicitly name the specific datasets (e.g., NuScenes, KITTI) used for the evaluation or provide concrete access information (links, citations) to those datasets. |
| Dataset Splits | No | The paper discusses using 'benign training dataset' and 'real or generated corrupted dataset' for model compression and evaluation. However, it does not provide specific details on how these datasets are split into training, validation, or test sets (e.g., specific percentages or sample counts), nor does it reference any standard predefined splits with citations. |
| Hardware Specification | Yes | All of the training experimental results are obtained with A100 GPU clusters. The BRCB-compressed models were evaluated on the NVIDIA DRIVE AGX Orin platform. |
| Software Dependencies | Yes | For the experiments in this section, we choose PyTorch version 1.9.0 as the framework to train all baseline and efficient neural models. |
| Experiment Setup | No | The paper describes the methodology of the Boost Robust Compression Boundary (BRCB) algorithm and its evaluation. While it details the architecture and mechanisms, it does not explicitly provide concrete hyperparameter values such as learning rates, batch sizes, or the number of training epochs in the main text. |