Regularized Proportional Fairness Mechanism for Resource Allocation Without Money

Authors: Sihan Zeng, Sujay Bhatt, Alec Koppel, Sumitra Ganesh

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We numerically evaluate RPF-Net (i) as the number of agents and resources changes, and (ii) when the mechanisms are tested on a distribution different from that observed during training. We also visualize the decision boundary of RPF-Net in contrast to the PF mechanism to provide more insight into how RPF-Net deviates from the PF mechanism it is designed to approximate and enhance.
Researcher Affiliation | Industry | Sihan Zeng, Sujay Bhatt, Alec Koppel, Sumitra Ganesh (all JPMorgan AI Research)
Pseudocode | Yes | Algorithm 1 (Training RPF-Net). Input: initial network parameters ω^[0]; dual variables {γ_i^[0]}_{i=1}^N; training dataset {(v_l, x_l, b_l)}_{l=1}^L; batch size s; number of training iterations K; primal and dual learning rates α, β. Output: network parameters ω^[K].
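The primal-dual structure of Algorithm 1 (a primal step on the network parameters ω with rate α, followed by dual ascent on the per-agent multipliers γ_i with rate β) can be sketched as follows. The actual RPF-Net loss and constraints are not reproduced in this excerpt, so a toy quadratic objective f(ω) = ½‖ω‖² with constraints ω_0 ≥ 1 and ω_1 ≥ 0.5 stands in for them; every name here is illustrative, not the authors' implementation.

```python
import numpy as np

# Toy primal-dual loop mirroring the structure of Algorithm 1.
# The true RPF-Net objective and constraints are not given in this
# excerpt; a quadratic loss and two linear constraints c_i(omega) <= 0
# stand in so the update pattern is runnable.
rng = np.random.default_rng(1)
N = 2                         # number of agents / dual variables (assumed)
omega = rng.normal(size=3)    # stand-in for network parameters omega^[0]
gamma = np.zeros(N)           # dual variables gamma_i^[0]
alpha, beta, K = 0.05, 0.05, 500  # primal/dual learning rates, iterations

def loss_grad(w):
    # gradient of the toy loss f(w) = 0.5 * ||w||^2
    return w

def constraints(w):
    # toy constraints c_i(w) <= 0: encodes w[0] >= 1 and w[1] >= 0.5
    return np.array([1.0 - w[0], 0.5 - w[1]])

# Jacobian of the constraints, grad c_i(w), one row per agent.
jac_c = np.array([[-1.0, 0.0, 0.0],
                  [0.0, -1.0, 0.0]])

for k in range(K):
    # gradient of the Lagrangian f(w) + sum_i gamma_i c_i(w) w.r.t. w
    grad_w = loss_grad(omega) + gamma @ jac_c
    omega = omega - alpha * grad_w                       # primal descent
    gamma = np.maximum(gamma + beta * constraints(omega), 0.0)  # dual ascent
```

For this toy problem the loop converges to ω ≈ (1, 0.5, 0), the minimizer of the loss subject to the two constraints, with multipliers γ ≈ (1, 0.5).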
Open Source Code | No | The paper does not provide an explicit statement about releasing code or a link to a code repository for RPF-Net. It mentions other mechanisms (RegretNet, ExS-Net) in the context of related work but not its own implementation code.
Open Datasets | No | In all experiments, the true values and demands follow uniform and Bernoulli uniform distributions, respectively, within the range [0.1, 1]. Specifically, the test samples are generated according to v_{i,m} ~ Unif(0.1, 1), x̃_{i,m} ~ Unif(0.1, 1), b_{i,m} ~ Bern(0.5), x_{i,m} = x̃_{i,m} · b_{i,m} (17). Training data is sampled from the same distributions as the test data according to (17), except in Sec. 6.2, which studies distribution mismatch.
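The sampling procedure quoted above takes only a few lines to reproduce. The array shapes (numbers of agents, resources, and samples) are assumed for illustration, since the excerpt does not fix them:

```python
import numpy as np

# Sample data as described in the row above: values v ~ Unif(0.1, 1),
# latent demands x_tilde ~ Unif(0.1, 1), Bernoulli mask b ~ Bern(0.5),
# realized demand x = x_tilde * b.
rng = np.random.default_rng(0)
L, N, M = 1000, 5, 3  # samples, agents, resources (assumed sizes)
v = rng.uniform(0.1, 1.0, size=(L, N, M))        # true values v_{i,m}
x_tilde = rng.uniform(0.1, 1.0, size=(L, N, M))  # latent demands
b = rng.integers(0, 2, size=(L, N, M))           # Bern(0.5) mask
x = x_tilde * b                                  # realized demands x_{i,m}
```

Per the row above, training data would be drawn from these same distributions, except in the distribution-mismatch study of Sec. 6.2.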
Dataset Splits | No | The paper describes how data is generated for training and testing and mentions a training dataset {(v_l, x_l, b_l)}_{l=1}^L and a batch size s. However, it does not provide specific percentages or absolute counts for training, validation, or test splits. It states that training data is sampled from the same distributions as test data without detailing the sizes of these sets or how they are split from a larger pool.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. While it discusses computational complexity and simulation times, it does not specify the underlying hardware environment.
Software Dependencies | No | The paper does not mention any specific software dependencies or library versions used for implementation or experimentation (e.g., Python, PyTorch, TensorFlow, or specific solvers with versions).
Experiment Setup | Yes | Algorithm 1 (Training RPF-Net). Input: initial network parameters ω^[0]; dual variables {γ_i^[0]}_{i=1}^N; training dataset {(v_l, x_l, b_l)}_{l=1}^L; batch size s; number of training iterations K; primal and dual learning rates α, β. Output: network parameters ω^[K]. All agent weights are set to 1.
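For context on the PF mechanism that RPF-Net approximates (Nash welfare maximization, here with all agent weights equal to 1 as in the setup above), a minimal sketch follows. It assumes linear utilities and unit supply of each divisible resource, and uses proportional-response dynamics, a standard method that is not from the paper and approximates the PF allocation only under these assumptions:

```python
import numpy as np

# Sketch of the PF (proportional fairness) allocation for divisible
# resources, assuming linear utilities u_i = sum_m v_{i,m} a_{i,m} and
# unit supply of each resource. Proportional-response dynamics
# approximate the allocation maximizing sum_i log u_i, i.e., PF with
# all agent weights equal to 1.
rng = np.random.default_rng(0)
N, M = 3, 2                                # agents, resources (assumed sizes)
V = rng.uniform(0.1, 1.0, size=(N, M))     # valuations v_{i,m}

B = np.full((N, M), 1.0 / M)               # each agent splits a unit budget
for _ in range(2000):
    A = B / B.sum(axis=0, keepdims=True)   # allocate each resource pro rata to bids
    U = (V * A).sum(axis=1)                # linear utilities
    B = V * A / U[:, None]                 # rebid proportional to utility share
```

By construction each resource is fully allocated (columns of A sum to 1) and each agent's bids always sum to its unit budget, which is what makes the dynamics well defined at every iteration.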