Learning to Help in Multi-Class Settings
Authors: Yu Wu, Yansong Li, Zeyu Dong, Nitya Sathyavageeswaran, Anand Sarwate
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our proposed methods offer an efficient and practical solution for multi-class classification in resource-constrained environments. |
| Researcher Affiliation | Academia | Rutgers University, University of Illinois Chicago, Stony Brook University |
| Pseudocode | Yes | Algorithm 1 Optimization With Our Surrogate Loss Function |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing code or a link to a code repository. No mention of code in supplementary materials. |
| Open Datasets | Yes | In this section, we test the proposed surrogate loss function in equation 7 and algorithms for different settings on CIFAR-10 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), and CIFAR-100 (Krizhevsky & Hinton, 2009) datasets. |
| Dataset Splits | Yes | CIFAR-10 consists of 32 × 32 color images drawn from 10 classes and is split into 50000 training and 10000 testing images. |
| Hardware Specification | Yes | The experiments are conducted on an RTX 3090. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | To reduce the computation, we use Stochastic Gradient Descent (SGD) for presentation. Specifically, we choose ce from an interval between [0, 0.5] with fixed inaccuracy costs c1 = 1 and c2 = 1.25. In our experiments, the base network structure for the client classifier and the rejector is LeNet-5, and the server classifier is either AlexNet or ViT. |
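The experiment-setup row describes a client/server deferral scheme: a small client model and a rejector decide per sample whether to answer locally or pay a query cost ce to invoke a larger server model, with inaccuracy costs attached to mistakes. The toy sketch below illustrates that cost trade-off only; it is not the paper's surrogate loss or trained rejector. The function names (`should_defer`), the confidence-threshold rule, and the specific cost values are illustrative assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw class scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def should_defer(client_logits, c_e, c_err):
    """Toy deferral rule (illustrative, not the paper's learned rejector).

    Defer to the server when the client's expected mistake cost,
    (1 - top-class probability) * c_err, exceeds the server query
    cost c_e. c_e in [0, 0.5] and c_err around 1 mirror the cost
    ranges quoted in the table above.
    """
    confidence = max(softmax(client_logits))
    return (1.0 - confidence) * c_err > c_e

# Confident client prediction: keep it local.
print(should_defer([10.0, 0.0, 0.0], c_e=0.25, c_err=1.0))  # False
# Ambiguous prediction: expected error cost exceeds the query cost, defer.
print(should_defer([0.1, 0.0, 0.1], c_e=0.25, c_err=1.0))   # True
```

Raising ce makes deferral less attractive, which is why the experiments sweep it over [0, 0.5]: at ce = 0 every uncertain sample would be sent to the server, while large ce forces the client to answer alone.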