Smoothed Online Convex Optimization with Delayed Feedback

Authors: Sifan Yang, Wenhao Yang, Wei Jiang, Yuanyu Wan, Lijun Zhang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we conduct experiments to validate the effectiveness and efficiency of our algorithms. In this section, we evaluate the performance of the proposed Smelt-DOGD and Efficient Smelt-DOGD methods against an existing algorithm for OCO with delayed feedback, Mild OGD [Wan et al., 2024], through numerical experiments. For the parameters, we set those of Smelt-DOGD and Efficient Smelt-DOGD according to Theorem 2 and Theorem 3, respectively.
Researcher Affiliation | Academia | 1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; 2) School of Artificial Intelligence, Nanjing University, Nanjing 210023, China; 3) School of Software Technology, Zhejiang University, Ningbo 315100, China
Pseudocode | Yes | Algorithm 1: Smelt-DOGD (Expert-algorithm); Algorithm 2: Smelt-DOGD (Meta-algorithm); Algorithm 3: Efficient Smelt-DOGD (Expert-algorithm); Algorithm 4: Efficient Smelt-DOGD (Meta-algorithm)
Open Source Code | No | The paper does not contain any explicit statement about releasing code or providing a link to a code repository.
Open Datasets | Yes | We implement the online classification on the ijcnn1 dataset from LIBSVM Data [Chang and Lin, 2011; Prokhorov, 2001].
Dataset Splits | No | In each round t ∈ [T], a batch of training examples {(x_{t,1}, y_{t,1}), ..., (x_{t,m}, y_{t,m})} arrives... The paper describes an online learning setting where data arrives in batches per round to simulate a changing environment; it does not specify conventional training/validation/test splits for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers used for the experiments.
Experiment Setup | Yes | In this experiment, we set the domain diameter as D = 10 and follow Zhao et al. [2022] to choose the domain W as the ellipsoid W = {w ∈ R^n | w^T E w ≤ λ_min(E) (D/2)^2}, where E is a certain diagonal matrix and λ_min denotes its minimum eigenvalue. To simulate the changing environment, we flip the labels of samples every 1000 iterations. For this dataset, the dimensionality is n = 22 and we set T = 4000, batch size m = 256, G = √22, λ = 10, and the delay d_t is selected uniformly at random from [1, 5]. ... We set the dimensionality n = 500, T = 4000, batch size m = 128, tradeoff parameter λ = 10, D = 200, and the delay d_t is selected uniformly at random from [1, 5]. To simulate the changing environment, we flip w every 1000 iterations.
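The delayed-feedback schedule and the label-flipping environment described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `simulate_delayed_rounds` and the exact bookkeeping are hypothetical; only the parameter choices (delays uniform in [1, 5], flips every 1000 rounds) come from the paper.

```python
import random

def simulate_delayed_rounds(T=4000, d_min=1, d_max=5, flip_every=1000, seed=0):
    """Sketch of the experimental schedule from the paper:
    the feedback of round t arrives d_t rounds later, with d_t drawn
    uniformly at random from [d_min, d_max], and the label sign flips
    every `flip_every` rounds to simulate a changing environment."""
    rng = random.Random(seed)
    arrivals = {}    # arrival round -> list of rounds whose feedback arrives then
    label_sign = []  # label_sign[t-1] is the sign used in round t
    for t in range(1, T + 1):
        d_t = rng.randint(d_min, d_max)          # delay of round t
        arrivals.setdefault(t + d_t, []).append(t)
        # sign alternates every `flip_every` rounds: +1, -1, +1, ...
        label_sign.append(1 if ((t - 1) // flip_every) % 2 == 0 else -1)
    return arrivals, label_sign
```

An online algorithm would, in round t, process only the gradients listed in `arrivals.get(t, [])`, which is exactly what makes the setting "delayed" rather than standard OCO.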