Importance Sampling for Nonlinear Models

Authors: Prakash Palanivelu Rajmohan, Fred Roosta

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our contributions are supported by both theoretical analyses and experimental results across a variety of supervised learning scenarios. We conduct a series of experiments to demonstrate the effectiveness and versatility of the proposed nonlinear importance scores. First, we evaluate our approach on several benchmarking regression datasets, including California Housing Prices (Pace & Barry, 1997), Medical Insurance dataset (Lantz, 2019), and Diamonds dataset (Wickham & Sievert, 2009). Second, we consider image classification using four standard datasets: SVHN (Street View House Numbers) (Netzer et al., 2011), FER-2013 (Facial Expression Recognition) (Goodfellow et al., 2013), NOTMNIST (Bulatov, 2011), and QD (Quick, Draw) (Ha & Eck, 2018).
Researcher Affiliation | Academia | 1 School of Electrical Engineering and Computer Science, University of Queensland, Brisbane, Australia. 2 School of Mathematics and Physics, University of Queensland, Brisbane, Australia. 3 ARC Training Centre for Information Resilience (CIRES), Brisbane, Australia. Correspondence to: Prakash P. Rajmohan <EMAIL>, Fred Roosta <EMAIL>.
Pseudocode | No | The paper describes methods and derivations in paragraph text and mathematical equations, but does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Additional experimental details are provided in Appendix A.5. The code is available here.
Open Datasets | Yes | First, we evaluate our approach on several benchmarking regression datasets, including California Housing Prices (Pace & Barry, 1997), Medical Insurance dataset (Lantz, 2019), and Diamonds dataset (Wickham & Sievert, 2009). Second, we consider image classification using four standard datasets: SVHN (Street View House Numbers) (Netzer et al., 2011), FER-2013 (Facial Expression Recognition) (Goodfellow et al., 2013), NOTMNIST (Bulatov, 2011), and QD (Quick, Draw) (Ha & Eck, 2018).
Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits; it only mentions strategies for sampling training instances and converting datasets for specific tasks, without detailing how the data was partitioned for overall evaluation. For classification tasks, it states: 'We convert the datasets into binary classification tasks, comparing visually similar (like) and dissimilar (unlike) classes, in SVHN and NotMNIST.'
Hardware Specification | No | The paper does not provide specific hardware details (such as CPU/GPU models, memory, or cloud resources) used for running its experiments.
Software Dependencies | No | The paper mentions software components such as PyTorch, the Adam optimizer, and BCEWithLogitsLoss(), but does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | Classification Experiments. To carry out the experiment in an under-parameterized setting, the dataset was balanced, and the images were resized to 10 × 10 dimensions with a grayscale background. A fully connected MLP was trained with a linear 100-input layer connected to a hidden layer with 10 neurons and a ReLU activation unit, followed by a sigmoid output transformation function. The optimal weights were computed using PyTorch, with the Adam optimizer for 1000-5000 epochs (depending on the dataset) and BCEWithLogitsLoss().
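The quoted setup can be sketched in PyTorch as follows. The layer sizes (100 flattened inputs from 10 × 10 grayscale images, a 10-neuron ReLU hidden layer, a single output), the Adam optimizer, and BCEWithLogitsLoss come from the paper's description; the batch size, dummy data, and default learning rate are illustrative assumptions. Note that BCEWithLogitsLoss applies the sigmoid internally, so the model itself emits raw logits.

```python
import torch
import torch.nn as nn

class BinaryMLP(nn.Module):
    """Under-parameterized MLP matching the paper's description."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(100, 10),  # 10x10 grayscale image, flattened
            nn.ReLU(),
            nn.Linear(10, 1),    # raw logit; sigmoid is applied inside the loss
        )

    def forward(self, x):
        return self.net(x)

model = BinaryMLP()
criterion = nn.BCEWithLogitsLoss()            # sigmoid + BCE, numerically stable
optimizer = torch.optim.Adam(model.parameters())

# Dummy batch of 8 flattened images with binary labels (assumed shapes).
x = torch.randn(8, 100)
y = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

In an actual run, the loop above would repeat for the paper's 1000-5000 epochs over the balanced, binarized dataset.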