CONGO: Compressive Online Gradient Optimization
Authors: Jeremy Carleton, Prathik Vijaykumar, Divyanshu Saxena, Dheeraj Narasimha, Srinivas Shakkottai, Aditya Akella
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical simulations and real-world microservices benchmarks demonstrate CONGO's superiority over gradient descent approaches that do not account for sparsity. |
| Researcher Affiliation | Academia | 1 Texas A&M University, 2 The University of Texas at Austin, 3 Inria |
| Pseudocode | Yes | Algorithm 1 CONGO-E: Compressive Online Gradient Optimization Efficient Version |
| Open Source Code | Yes | Code available at https://github.com/5-Jeremy/CONGO |
| Open Datasets | Yes | We utilize the Social Network application from the DeathStarBench suite (Gan et al., 2019), which represents a small-scale social media platform with various request types such as compose-post, read-user-timeline, and read-home-timeline. |
| Dataset Splits | No | The paper describes simulation scenarios and workload patterns (e.g., fixed workload, variable arrival rate, variable job type) rather than explicit training/test/validation splits for static datasets. While the PPO agent baseline mentions 30 training and 30 testing iterations, this refers to the training/testing of the RL agent itself, not a dataset split for the overall methodology. |
| Hardware Specification | Yes | The numerical simulations were run on a machine with an Intel Core i7 processor and an NVIDIA GeForce RTX 3050 Ti Laptop GPU. Other reported machines: CPU: AMD Ryzen Threadripper 3960X 24-Core Processor; CPU: Intel(R) Core(TM) i9-9940X CPU @ 3.30GHz; 2 x NVIDIA GeForce RTX 2080 Ti. |
| Software Dependencies | No | The paper mentions several software components, such as 'Numpy', 'pyproximal', 'queueing-tool (Jordon, 2023)', the 'scipy.optimize' Python package, and the 'wrk2' tool, but it does not specify version numbers for any of them. |
| Experiment Setup | Yes | Table 1: Hyperparameters for Numerical Experiments; Table 6: Glossary of Hyperparameters for DeathStarBench Trials; Table 7: NSGD Hyperparameters for DeathStarBench Trial; Table 8: SGDSP Hyperparameters for DeathStarBench Trial; Table 9: Proximal Policy Optimization (PPO) for DeathStarBench Trial; Table 10: CONGO-B Hyperparameters for DeathStarBench Trial; Table 11: CONGO-Z Hyperparameters for DeathStarBench Trial; Table 12: CONGO-E Hyperparameters for DeathStarBench Trial |
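For context on what "accounting for sparsity" means here, the sketch below illustrates the general idea behind compressive gradient estimation: probe a function along a few random directions and recover a sparse gradient via compressed sensing. This is a minimal, illustrative NumPy example (using a simple orthogonal matching pursuit), not a reproduction of the paper's CONGO-B/Z/E algorithms; all function and parameter names are our own.

```python
import numpy as np

def estimate_sparse_gradient(f, x, m, n_atoms, delta=1e-5, seed=None):
    """Estimate a sparse gradient of f at x from m << dim(x) measurements.

    Each measurement y_i = (f(x + delta*a_i) - f(x)) / delta approximates the
    directional derivative <a_i, grad f(x)>. The (assumed sparse) gradient is
    then recovered by a simple orthogonal matching pursuit over n_atoms steps.
    """
    rng = np.random.default_rng(seed)
    d = x.size
    A = rng.standard_normal((m, d)) / np.sqrt(m)  # random measurement matrix
    fx = f(x)
    y = np.array([(f(x + delta * a) - fx) / delta for a in A])

    # Orthogonal matching pursuit: greedily pick coordinates most correlated
    # with the residual, then refit the selected coordinates by least squares.
    support, residual = [], y.copy()
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef

    g = np.zeros(d)
    g[support] = coef
    return g

# Toy example: a 40-dimensional function whose gradient has only 2 nonzero
# entries; 20 random measurements suffice to recover it.
f = lambda x: 3.0 * x[4] - 2.0 * x[11]
g = estimate_sparse_gradient(f, np.zeros(40), m=20, n_atoms=3, seed=0)
```

The point of the sketch is the sample-complexity trade-off the paper exploits: a dense finite-difference estimate would need on the order of `d` function evaluations per step, while a sparse gradient can be recovered from far fewer, which matters when each evaluation is an expensive microservice benchmark run.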