Interval Selection with Binary Predictions

Authors: Christodoulos Karavasilis

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conclude with some experimental results on real-world data that complement our theoretical findings, and show the benefit of prediction algorithms for online interval selection, even in the presence of high error."
Researcher Affiliation | Academia | Christodoulos Karavasilis, University of Toronto, EMAIL
Pseudocode | Yes | Algorithm 1 (Naive), Algorithm 2 (Revoke-Unit), Algorithm 3 (LR), Algorithm 4 (Revoke-Proportional)
Open Source Code | No | The paper does not explicitly state that the authors' source code for the methodology is available, nor does it provide a link to a code repository.
Open Datasets | Yes | "We use real-world data from scheduling jobs on parallel machines to test our algorithms." More information on the handling of these datasets can be found in [Feitelson et al., 2014]. The paper focuses on two datasets, NASA-iPSC (18,239 jobs) and CTC-SP2 (77,222 jobs), available at https://www.cs.huji.ac.il/labs/parallel/workload/
Dataset Splits | No | The paper states, "For every algorithm we average its performance over random permutations of the input instance, for multiple error values." This describes shuffling the input for evaluation, but no distinct training, validation, or test splits (with percentages or counts) are specified for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU or GPU models, memory).
Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers for the experimental setup (e.g., programming languages, libraries, or frameworks).
Experiment Setup | No | The paper describes conceptual aspects of the algorithms and their parameters (such as λ), but it does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or system-level training settings.
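The evaluation protocol quoted above (averaging each algorithm over random permutations of a workload instance) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the field indices assume the Standard Workload Format (SWF) used by the Parallel Workloads Archive linked above, and `greedy_online_selection` is a generic stand-in for the paper's algorithms (Naive, Revoke-Unit, LR, Revoke-Proportional), whose actual pseudocode is given in the paper.

```python
import random

def parse_swf_line(line):
    """Parse one job record from a Standard Workload Format (SWF) log.

    Field 2 (index 1) is the submit time and field 4 (index 3) the run
    time, both in seconds. The job becomes the interval
    [submit, submit + runtime).
    """
    fields = line.split()
    submit, runtime = int(fields[1]), float(fields[3])
    return (submit, submit + runtime)

def greedy_online_selection(intervals):
    """Stand-in online algorithm: accept an arriving interval iff it
    overlaps nothing accepted so far; return the number accepted."""
    accepted = []
    for start, end in intervals:
        if all(end <= s or start >= e for s, e in accepted):
            accepted.append((start, end))
    return len(accepted)

def average_over_permutations(intervals, trials=100, seed=0):
    """Average an algorithm's performance over random permutations of
    the input instance, as described in the experimental section."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        order = list(intervals)
        rng.shuffle(order)
        total += greedy_online_selection(order)
    return total / trials
```

In the paper's setting, the algorithm under test would additionally consume a binary prediction per interval (and an error level would control how the predictions are corrupted); that machinery is omitted here since its exact form is specified only in the paper's pseudocode.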