Safe-EF: Error Feedback for Non-smooth Constrained Optimization

Authors: Rustem Islamov, Yarden As, Ilyas Fatkhullin

Venue: ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments in a reinforcement learning setup, simulating distributed humanoid robot training, validate the effectiveness of Safe-EF in ensuring safety and reducing communication complexity. Supporting quote: "Extensive experiments and ablation studies of Safe-EF, putting the method to the test on a challenging task of distributed humanoid robot training and providing important practical insights into the performance of non-smooth EF methods."
Researcher Affiliation | Academia | "¹University of Basel, Switzerland ²ETH Zürich, Switzerland ³ETH AI Center, Switzerland. Correspondence to: Rustem Islamov <EMAIL>."
Pseudocode | Yes | "Algorithm 1 Safe-EF with bidirectional compression" (a hedged sketch of the underlying error-feedback mechanism appears after this table).
Open Source Code | Yes | "For more specific details, please use our open-source implementation https://github.com/yardenas/safe-ef."
Open Datasets | No | The paper mentions using a
Dataset Splits | No | The paper mentions a 'batch size N_fv = 1024' and 'a batch of 128 trajectories', which are training parameters, and it describes a synthetic data generation process. However, it does not specify explicit training, validation, or test dataset splits for any of the experiments (synthetic, Humanoid, Cartpole, or Neyman-Pearson classification).
Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU or CPU models or memory specifications. It refers generally to 'distributed humanoid robot training' without naming hardware.
Software Dependencies | No | The paper mentions several software components, such as PPO (Schulman et al., 2017), Adam (Kingma & Ba, 2014), and Brax (Freeman et al., 2021), but it does not give version numbers for these dependencies, which a reproducible description requires.
Experiment Setup | Yes | "Unless specified otherwise, in all our experiments the default number of workers is n = 16 and the compression ratio is K/d = 0.1 with Top-K compression. We parameterize a neural network policy with d = 0.2M parameters and use a batch size N_fv = 1024 to evaluate f_i and g_i. We keep the default value γ = 0.0003, with Adam as optimizer (Kingma & Ba, 2014). The only deviation from these parameters is the entropy regularization coefficient, which we set to 0.01 from 0.001. Table 1: The algorithms' hyperparameters used in the training from Section 6.1." (These defaults are collected into a configuration sketch after the table.)
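To make the "Algorithm 1 Safe-EF with bidirectional compression" row concrete, below is a minimal sketch of the classic error-feedback (EF) template with Top-K compression, in plain Python/NumPy. It is not the authors' Safe-EF, which additionally enforces safety constraints and compresses in both directions; the names EFWorker, top_k, and compress_step, the toy quadratic objective, and the simple server-side averaging are illustrative assumptions. The official implementation lives at https://github.com/yardenas/safe-ef.

```python
# Minimal error-feedback (EF) sketch with Top-K compression.
# Illustrative only -- NOT the paper's Safe-EF algorithm.
import numpy as np


def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out


class EFWorker:
    """One worker holding a local error buffer that accumulates what Top-K drops."""

    def __init__(self, dim: int, k: int):
        self.error = np.zeros(dim)  # residual from previous compressions
        self.k = k

    def compress_step(self, grad: np.ndarray) -> np.ndarray:
        corrected = self.error + grad        # re-inject the past residual
        message = top_k(corrected, self.k)   # sparse message sent upstream
        self.error = corrected - message     # store what was dropped
        return message


# Toy run: n workers minimize the quadratic f(x) = ||x - 1||^2 from noisy gradients.
dim, k, n, lr = 100, 10, 4, 0.1
rng = np.random.default_rng(0)
workers = [EFWorker(dim, k) for _ in range(n)]
x = rng.normal(size=dim)
for _ in range(200):
    grads = [2.0 * (x - 1.0) + 0.1 * rng.normal(size=dim) for _ in range(n)]
    messages = [w.compress_step(g) for w, g in zip(workers, grads)]
    x -= lr * np.mean(messages, axis=0)
print(np.linalg.norm(x - 1.0))  # should be small despite sending only 10% of coordinates
```

The key invariant is that coordinates dropped by Top-K are not lost but fed back through each worker's residual buffer on the next round; this is what lets EF-style methods tolerate aggressive compression ratios such as the K/d = 0.1 used in the paper's experiments.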
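Separately, the default experiment setup quoted in the last row can be summarized as a single configuration mapping. This is a hypothetical sketch: the key names are ours, not the authors', and only the values come from the quoted text.

```python
# Defaults quoted from the paper's experiment-setup description (Section 6.1 / Table 1).
# Key names are illustrative assumptions; values are taken from the quoted text.
DEFAULT_CONFIG = {
    "num_workers": 16,             # n = 16 workers
    "compressor": "top_k",         # Top-K sparsification
    "compression_ratio": 0.1,      # K/d = 0.1
    "policy_num_params": 200_000,  # d = 0.2M policy parameters
    "batch_size": 1024,            # N_fv samples to evaluate f_i and g_i
    "learning_rate": 3e-4,         # γ = 0.0003
    "optimizer": "adam",           # Adam (Kingma & Ba, 2014)
    "entropy_coef": 0.01,          # raised from the default 0.001
}

# Example: number of coordinates K each worker transmits per round.
K = int(DEFAULT_CONFIG["compression_ratio"] * DEFAULT_CONFIG["policy_num_params"])
print(K)  # 20000
```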