ABNet: Adaptive explicit-Barrier Net for Safe and Scalable Robot Learning
Authors: Wei Xiao, Tsun-Hsuan Wang, Chuang Gan, Daniela Rus
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficiency and strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving, with results showing much better robustness and guarantees over existing models. 1. Introduction... 4. Experiments: In this section, we conduct several experiments to answer the following questions: ...Benchmark models: We compare with (i) baseline: Tables 1, 2 single end-to-end learning model (E2E) (Levine et al., 2016) and Table 3 single vanilla end-to-end (V-E2E) model (Amini et al., 2022)... Evaluation metrics: The evaluation metrics are defined as follows: mean square error of the model testing (MSE), satisfaction of safety constraints where non-negative values mean safety guarantees (SAFETY), system conservativeness (CONSER.), steering control u1 uncertainty (u1 UNCERTAINTY), acceleration control u2 uncertainty (u2 UNCERTAINTY), and theoretical safety guarantees (THEORET. GUAR.) respectively. |
| Researcher Affiliation | Academia | ¹Computer Science and Artificial Intelligence Lab, MIT, USA; ²UMass Amherst and MIT-IBM Watson AI Lab, USA. Correspondence to: Wei Xiao <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Construction and training of ABNet. Input: the problem setup (a)-(d) given in the problem formulation (Sec. 2). Output: a robust and safe controller u for the system. (a) Formulate each head of explicit-Barriers as in (4). (b) Build the cross connection among explicit-Barriers via p_i(z\|θ^i_p), i ∈ {1, ..., m−1}. (c) Fuse all the heads of explicit-Barriers as in (5). If incremental training: decouple p_i(z\|θ^i_p), i ∈ {1, ..., m−1} and define them for each explicit-Barrier; train each head of explicit-Barriers, respectively; choose a p_i(z\|θ^i_p) from one of the explicit-Barriers to build the cross connection; fuse all the explicit-Barriers via (6). Else: directly train the ABNet via reverse-mode error backpropagation. End if. |
| Open Source Code | Yes | Code is available at: https://github.com/Weixy21/ABNet |
| Open Datasets | Yes | We finally test our models in a more complicated and realistic task: vision-based driving, using an open dataset and benchmark from VISTA (Amini et al., 2022). ... The dataset is open-sourced, including 0.4 million image-control pairs from a closed-road sim-to-real driving field. Static and parked cars of different types and colors are used as obstacles in the dataset. The dataset is collected from the VISTA simulator (Amini et al., 2022). |
| Dataset Splits | No | The paper mentions dataset sizes (e.g., '100 trajectories', '1000 trajectories', '0.4 million image-control pairs') and refers to 'training' and 'testing' phases, but it does not provide specific percentages, exact sample counts, or a detailed methodology for how the datasets were split into training, validation, or test sets for reproducibility. |
| Hardware Specification | Yes | The training time of the ABNet is about 1 hour for 20 epochs on a RTX-3090 computer. ... The training time of the ABNet is about 2 hours for 10 epochs on a RTX-3090 computer. ... The training time of the ABNet is about 15 hours for 5 epochs on a RTX-3090 computer. |
| Software Dependencies | No | The paper mentions using 'Adam as the optimizer', an 'MSE loss function', and 'QPFunction from OptNet (Amos & Kolter, 2017)', but it does not specify version numbers for these software components or any other libraries used. |
| Experiment Setup | Yes | All the models include fully connected layers of shape [5, 128, 32, 32, 2] with ReLU as activation functions. ... We use Adam as the optimizer to train the model with an MSE loss function and a learning rate of 0.001. ... All the models include CNN layers ([[3, 24, 5, 2, 2], [24, 36, 5, 2, 2], [36, 48, 3, 2, 1], [48, 64, 3, 1, 1], [64, 64, 3, 1, 1]]), an LSTM layer (size: 64), and fully connected layers of shape [32, 32, 2] with ReLU as activation functions. The dropout rates for both the CNN and fully connected layers are 0.3. |
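The Experiment Setup row quotes a fully connected stack of shape [5, 128, 32, 32, 2] with ReLU activations. A minimal NumPy sketch of that forward pass is below; the layer widths come from the quote, while the random input, weight initialization, and seed are illustrative assumptions, and the quoted Adam/MSE training loop is not reproduced.

```python
import numpy as np

# Layer widths quoted in the paper's experiment setup: [5, 128, 32, 32, 2].
LAYER_SIZES = [5, 128, 32, 32, 2]

rng = np.random.default_rng(0)
# He-style initialization is an assumption; the paper does not state one.
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def forward(x):
    """ReLU on hidden layers, linear final layer (2 control outputs)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:      # no activation on the output layer
            x = np.maximum(x, 0.0)    # ReLU
    return x

state = rng.standard_normal((1, 5))   # 5-dim robot state (assumed)
u = forward(state)                    # 2-dim control output
```

The two output units match the two controls (steering u1, acceleration u2) named in the evaluation metrics.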
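Algorithm 1 above fuses m heads of explicit-Barriers into a single controller via the paper's Eqs. (5)-(6), which are not reproduced in this report. The sketch below uses a uniform weighted average purely as a hypothetical stand-in for that fusion step; the function name and weighting are assumptions, not the paper's definitions.

```python
import numpy as np

def fuse_heads(head_outputs, weights=None):
    """Hypothetical fusion: weighted average of the per-head controls."""
    outputs = np.stack(head_outputs)          # shape (m, control_dim)
    if weights is None:
        # Uniform weights stand in for the learned fusion of Eqs. (5)-(6).
        weights = np.full(len(head_outputs), 1.0 / len(head_outputs))
    return weights @ outputs                  # fused control, (control_dim,)

# Three toy heads, each proposing a 2-dim control (steering, acceleration).
heads = [np.array([0.1, 1.0]), np.array([0.3, 0.8]), np.array([0.2, 0.9])]
u = fuse_heads(heads)                         # → array([0.2, 0.9])
```

Under incremental training (the `if` branch of Algorithm 1), each head would be trained separately before such a fusion; the direct branch trains the fused model end-to-end by backpropagation.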
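The evaluation metrics quoted under Research Type include MSE (model testing error) and SAFETY, where non-negative constraint values mean the guarantee held. A small sketch of both, assuming SAFETY is the worst-case constraint value along a run; that aggregation, and the barrier trace below, are illustrative assumptions since the quote does not define them.

```python
import numpy as np

def mse(pred, target):
    """Mean squared error of model predictions against ground truth."""
    return float(np.mean((pred - target) ** 2))

def safety_margin(h_values):
    """Assumed SAFETY metric: minimum constraint value over a trajectory."""
    return float(np.min(h_values))

pred = np.array([0.1, 0.9, 0.2])
target = np.array([0.0, 1.0, 0.0])
h_trace = np.array([0.5, 0.2, 0.01])   # toy barrier values along a run

err = mse(pred, target)                # (0.01 + 0.01 + 0.04) / 3 = 0.02
safe = safety_margin(h_trace)          # 0.01 >= 0 → constraint satisfied
```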