OCCAM: Towards Cost-Efficient and Accuracy-Aware Classification Inference

Authors: Dujian Ding, Bicheng Xu, Laks Lakshmanan

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On a variety of real-world datasets, OCCAM achieves a 40% cost reduction with little to no accuracy drop.
Researcher Affiliation | Academia | Dujian Ding, Bicheng Xu, Laks V. S. Lakshmanan, University of British Columbia
Pseudocode | Yes | Algorithm 1: OCCAM Algorithm. Input: test query batch X; ML classifiers f1, f2, ..., fM and costs c1, c2, ..., cM; query samples S1, S2, ..., Sk; user cost budget B. Output: optimal model portfolio µ : X → [M].
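The budgeted model-portfolio idea behind Algorithm 1 can be sketched in a two-model special case. This is an illustrative stand-in, not the paper's algorithm; the model names, accuracy estimates, and per-query costs below are made up for the example.

```python
def mixed_portfolio(num_queries, cheap, expensive, budget):
    """Assign each query to one of two models so total cost stays within budget.

    cheap / expensive: (name, est_accuracy, per_query_cost), where the
    expensive model is strictly more accurate and more costly. Routes as
    many queries as the budget allows to the expensive model and the rest
    to the cheap one — a toy stand-in for OCCAM's optimization.
    """
    _, _, c_cheap = cheap
    _, _, c_exp = expensive
    if c_cheap * num_queries > budget:
        raise ValueError("budget cannot cover even the cheapest model")
    # largest n with n*c_exp + (num_queries - n)*c_cheap <= budget
    n_exp = min(num_queries,
                int((budget - c_cheap * num_queries) // (c_exp - c_cheap)))
    # the portfolio maps each query index to a model name
    return {q: (expensive[0] if q < n_exp else cheap[0])
            for q in range(num_queries)}
```

For example, `mixed_portfolio(10, ("small", 0.80, 1.0), ("large", 0.95, 5.0), budget=30.0)` routes 5 queries to "large" and 5 to "small", spending exactly the budget of 30.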
Open Source Code | Yes | Code is available at https://github.com/DujianDing/OCCAM.git.
Open Datasets | Yes | We consider four widely studied image classification datasets: CIFAR-10 (10 classes) (Krizhevsky et al., 2009), CIFAR-100 (100 classes) (Krizhevsky et al., 2009), Tiny ImageNet (200 classes) (CS231n), and ImageNet-1K (1000 classes) (Russakovsky et al., 2015).
Dataset Splits | Yes | We randomly sample 20,000 images from the training set as our validation set, and we use the remaining 30,000 images to train our models (from the CIFAR-10 description in Appendix C.1).
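The reported split (20,000 validation images sampled from a 50,000-image CIFAR-10 training set, 30,000 kept for training) can be reproduced with a simple random shuffle. This sketch assumes index-based datasets and a fixed seed of our choosing; the paper does not report a seed.

```python
import random

def split_train_val(num_examples, val_size, seed=0):
    """Randomly partition example indices into train and validation sets.

    Mirrors the reported CIFAR-10 split: 50,000 training images divided
    into 30,000 for training and 20,000 for validation. The seed is an
    assumption, not stated in the paper.
    """
    indices = list(range(num_examples))
    random.Random(seed).shuffle(indices)
    return indices[val_size:], indices[:val_size]
```

With `split_train_val(50000, 20000)`, the two returned index lists are disjoint and together cover all 50,000 examples.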
Hardware Specification | Yes | All experiments are conducted on one NVIDIA V100 GPU with 32 GB of GPU RAM.
Software Dependencies | No | The paper mentions the Adam optimizer and the HiGHS ILP solver, but does not provide version numbers for these software components or for other libraries such as PyTorch.
Experiment Setup | Yes | For all seven models, we use the Adam optimizer (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999, a constant learning rate of 0.00001, and a batch size of 500 for training. Models are trained until convergence.
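The reported optimizer settings (Adam with β1 = 0.9, β2 = 0.999, constant learning rate 1e-5) correspond to the standard Adam update, shown here as a single-parameter reference implementation. The paper presumably uses a deep learning framework whose version is not stated; the epsilon value below is Adam's conventional default, also an assumption.

```python
def adam_step(theta, grad, m, v, t, lr=1e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba, 2015) for a scalar parameter.

    Hyperparameters match the paper's reported setup: beta1=0.9,
    beta2=0.999, constant learning rate 1e-5. eps is Adam's usual
    default, not stated in the paper.
    """
    m = beta1 * m + (1 - beta1) * grad            # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

At the first step (t = 1), the bias-corrected update is approximately lr × sign(grad), so each parameter moves by about 1e-5 per step regardless of gradient scale.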