Adversarially Robust Out-of-Distribution Detection Using Lyapunov-Stabilized Embeddings

Authors: Hossein Mirzaei Sadeghlou, Mackenzie Mathis

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our method through extensive experiments across several benchmarks, demonstrating superior performance, particularly under adversarial attacks. Notably, our approach improves robust detection performance from 37.8% to 80.1% on CIFAR-10 vs. CIFAR-100 and from 29.0% to 67.0% on CIFAR-100 vs. CIFAR-10."
Researcher Affiliation | Academia | Hossein Mirzaei & Mackenzie W. Mathis, École Polytechnique Fédérale de Lausanne (EPFL), EMAIL, EMAIL
Pseudocode | Yes | "The complete algorithmic workflow of AROS can be found in Appendix A2."
Open Source Code | Yes | "Code and pre-trained models are available at https://github.com/AdaptiveMotorControlLab/AROS."
Open Datasets | Yes | "CIFAR-10 or CIFAR-100 (94) served as the ID. Table 2a extends the evaluation to ImageNet-1k as the ID, with OOD being comprised of Texture (95), SVHN (96), iNaturalist (97), Places365 (98), LSUN (99), and iSUN (100). ... Datasets used for OSR included CIFAR-10, CIFAR-100, ImageNet-1k, MNIST (102), FMNIST (103), and Imagenette (104) (Table 2b)."
Dataset Splits | Yes | "An OSR (101) setup was also tested, in which each experiment involved a single dataset that was randomly split into ID (60%) and OOD (40%) subclasses, with results averaged over 10 trials."
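The OSR split protocol quoted above (randomly partition one dataset's classes into 60% ID and 40% OOD, repeated per trial) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function and argument names are hypothetical.

```python
import random

def split_id_ood_classes(class_labels, id_frac=0.6, seed=0):
    """Randomly split a dataset's classes into ID and OOD subsets.

    Illustrative sketch of the OSR protocol described in the paper;
    names and the seeding scheme are assumptions for this example.
    """
    rng = random.Random(seed)          # fixed seed per trial for reproducibility
    classes = list(class_labels)
    rng.shuffle(classes)
    n_id = round(len(classes) * id_frac)
    return sorted(classes[:n_id]), sorted(classes[n_id:])

# Example: CIFAR-10 has 10 classes, so each trial yields 6 ID and 4 OOD classes.
id_classes, ood_classes = split_id_ood_classes(range(10), seed=1)
```

Averaging over 10 trials then amounts to calling this with seeds 0 through 9 and aggregating the detection metrics.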
Hardware Specification | No | The paper does not explicitly mention the specific hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications).
Software Dependencies | No | The paper mentions using the 'geotorch.orthogonal library' but does not provide version numbers for this or any other key software components, nor does it specify the deep learning framework used (e.g., PyTorch, TensorFlow) or its version.
Experiment Setup | Yes | "train it for 200 epochs on classification using PGD10. For the integration of hϕ, an integration time of T = 5 is applied. Training with the loss LSL is performed over 100 epochs. We used SGD as the optimizer, employing a cosine learning rate decay schedule with an initial learning rate of 0.05 and a batch size of 128."
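The cosine learning-rate decay quoted above (initial rate 0.05, decayed over the training epochs) can be sketched with a small pure-Python helper. This is a generic cosine schedule under the stated hyperparameters, not the paper's implementation; the minimum rate of 0.0 is an assumption.

```python
import math

def cosine_lr(step, total_steps, base_lr=0.05, min_lr=0.0):
    """Cosine decay from base_lr to min_lr over total_steps.

    Matches the common cosine-annealing formula; the paper's exact
    schedule (per-epoch vs. per-batch, warmup, min_lr) may differ.
    """
    t = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))

# Over the 100-epoch LSL phase: starts at 0.05, halves at midpoint, ends at 0.
lrs = [cosine_lr(e, 100) for e in range(101)]
```

Stepping this once per epoch alongside an SGD optimizer with batch size 128 reproduces the reported schedule shape.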