Learning Classifiers That Induce Markets

Authors: Yonatan Sommer, Ivri Hikri, Lotan Amit, Nir Rosenfeld

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We now turn to demonstrate how our market-aware strategic learning framework performs empirically on real data with simulated market behavior. We use two common and publicly available datasets and adapt them to our strategic market setting: (i) the adult dataset, shown here, using the capital_gain feature as a proxy for budgets b; and (ii) the folktables dataset, deferred to Appendix B.5.
Researcher Affiliation | Academia | Faculty of Computer Science, Technion - Israel Institute of Technology, Haifa, Israel. Correspondence to: Nir Rosenfeld <EMAIL>.
Pseudocode | Yes | Algorithm 1: Exact empirical market prices
Open Source Code | Yes | Code is publicly available at https://github.com/BML-Technion/MASC.
Open Datasets | Yes | We use two common and publicly available datasets and adapt them to our strategic market setting: (i) the adult dataset, shown here, using the capital_gain feature as a proxy for budgets b; and (ii) the folktables dataset, deferred to Appendix B.5. ... The data is publicly available at https://archive.ics.uci.edu/dataset/2/adult. ... The data is publicly available at https://github.com/socialfoundations/folktables.
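The budget-proxy idea above can be sketched in a few lines. The toy values, the min-max normalization, and the budget_scale knob below are our assumptions for illustration; the paper does not state its exact mapping from capital_gain to budgets b.

```python
import numpy as np

# Toy capital_gain values standing in for the real adult data
# (https://archive.ics.uci.edu/dataset/2/adult); the normalization
# and budget_scale below are assumptions, not the paper's stated mapping.
capital_gain = np.array([0.0, 2174.0, 0.0, 15024.0])

def budgets_from_capital_gain(g, budget_scale=1.0):
    """Map the capital_gain feature to per-agent budgets b in
    [0, budget_scale] via min-max normalization (an assumed mapping)."""
    b = (g - g.min()) / (g.max() - g.min() + 1e-12)
    return budget_scale * b

b = budgets_from_capital_gain(capital_gain, budget_scale=32)
```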
Dataset Splits | Yes | Data splits. We used a train-validation-test split of 70:10:20 and averaged the results over 10 random data splits.
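The split protocol above can be sketched as follows. The 70:10:20 ratio and the 10 random splits come from the paper; the particular shuffling scheme and seed choice are our assumptions.

```python
import numpy as np

def train_val_test_split(n, seed):
    """70:10:20 train/validation/test split of n examples;
    the shuffling scheme and seeds are assumptions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Results are averaged over 10 random splits, as in the paper.
splits = [train_val_test_split(1000, seed) for seed in range(10)]
```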
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | All code was implemented in Python, and the learning framework was implemented using PyTorch. The paper does not provide version numbers for these software components.
Experiment Setup | Yes | We used the following hyperparameters:
  - Temperature T_softsort for the softsort operator: 0.001
  - Temperature T_softmax for the softmax operator: 0.01
  - Batch size: 500
  - Learning rate: adult: 0.001 for budget_scale in [1, 32], 0.01 for budget_scale in [64, 1024]; folktables: 0.001 for all budget scales
  - Regularization coefficient: 0.1
  - Epochs: 100 for adult, 1000 for folktables
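The reported setup can be collected into a single config sketch. The values come from the table above; the `budget_scale <= 32` cutoff interprets the reported ranges [1, 32] and [64, 1024] as powers of two, which is our assumption.

```python
# Reported hyperparameters gathered into one config dict (values from
# the paper); key names here are our own, not the authors' code.
CONFIG = {
    "T_softsort": 0.001,   # temperature for the softsort operator
    "T_softmax": 0.01,     # temperature for the softmax operator
    "batch_size": 500,
    "reg_coef": 0.1,       # regularization coefficient
    "epochs": {"adult": 100, "folktables": 1000},
}

def learning_rate(dataset, budget_scale):
    """Learning rate per dataset and budget scale; the <= 32 cutoff
    interprets the reported ranges [1, 32] vs [64, 1024] (assumption)."""
    if dataset == "adult":
        return 0.001 if budget_scale <= 32 else 0.01
    return 0.001  # folktables: 0.001 for all budget scales
```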