HARE: Human-in-the-Loop Algorithmic Recourse

Authors: Sai Srinivas Kancheti, Rahul Vigneswaran, Bamdev Mishra, Vineeth N. Balasubramanian

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform experiments on 3 benchmark datasets on top of 6 popular baseline recourse methods, where we observe that our framework performs significantly better on simulated user preferences.
Researcher Affiliation | Collaboration | Sai Srinivas Kancheti (Indian Institute of Technology Hyderabad, India); Rahul Vigneswaran (Indian Institute of Technology Hyderabad, India); Bamdev Mishra (Microsoft, India); Vineeth N. Balasubramanian (Indian Institute of Technology Hyderabad, India)
Pseudocode | Yes | Algorithm 1: Actionable Sampling; Algorithm 2: Boundary Point Search; Algorithm 3: Final Candidate Recourses; Algorithm 4: HARE
Open Source Code | Yes | Our code is publicly available: https://github.com/rahulvigneswaran/HARE
Open Datasets | Yes | We evaluate on 3 commonly used binary datasets spanning different application domains, including credit worthiness, criminal recidivism, and income prediction, which are popularly used in recourse literature. Adult Income (Becker & Kohavi, 1996) is a binary classification dataset... Give Me Some Credit (Kaggle, 2021) is used to predict credit worthiness... Finally, we consider COMPAS (Larson et al., 2016)
Dataset Splits | No | The paper mentions using a 'test-set' for recourse generation and '150 fixed individual samples taken from the test-set', but it does not explicitly state the train/validation/test split percentages or the splitting methodology used when training the classifiers.
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU models, CPU types, or cloud-computing specifications, used to run the experiments.
Software Dependencies | No | The paper mentions using CARLA (Pawelczyk et al., 2021) for recourse-generator implementations and the Adam (Kingma & Ba, 2014) optimizer, but it does not specify version numbers for these or other software libraries/dependencies.
Experiment Setup | Yes | We have a total budget of B = 30 user queries... For Actionable Sampling, we perform full-batch gradient descent using the Adam (Kingma & Ba, 2014) optimizer for n = 100 iterations with a learning rate of 0.1. We set the magnitude hyperparameter γ to 1 and the regularization hyperparameter λ to 10. In Boundary Point Search the tolerance value ε is set to 1e-06. All experimental results are averaged over 5 seeds to ensure robustness.
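As a concrete reference for the Actionable Sampling settings reported above (full-batch gradient descent with Adam, learning rate 0.1, n = 100 iterations), the optimizer loop can be sketched as follows. This is a minimal sketch: the update rule is standard Adam, but the function names, the beta/epsilon defaults, and the placeholder objective in the usage example are assumptions of this report, not the paper's actual sampling loss (which involves γ and λ).

```python
import numpy as np

def adam_descent(grad_fn, x0, lr=0.1, n_iters=100,
                 beta1=0.9, beta2=0.999, eps=1e-8):
    """Full-batch Adam loop using the reported lr = 0.1 and n = 100.
    `grad_fn` and `adam_descent` are illustrative names; the beta and
    eps defaults are Adam's usual values, not values from the paper."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment (mean) estimate
    v = np.zeros_like(x)  # second-moment (uncentred variance) estimate
    for t in range(1, n_iters + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x
```

For example, minimizing the placeholder objective ||x − target||² via its gradient 2(x − target) drives x toward target over the 100 iterations; the paper's sampling objective would replace this gradient.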
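The Boundary Point Search step (run to tolerance ε = 1e-06) can be sketched as a bisection along the segment between an unfavourably classified point and a favourably classified one, stopping once the bracketing interval is shorter than ε. Note this is an assumed reading of Algorithm 2: every identifier below (`boundary_point_search`, `clf`, the interpolation scheme) is illustrative rather than taken from the paper.

```python
def boundary_point_search(clf, x_neg, x_pos, eps=1e-6):
    """Bisect along the segment from x_neg (classified 0) to x_pos
    (classified 1) until the bracket is shorter than eps, returning a
    point close to the classifier's decision boundary."""
    lo, hi = 0.0, 1.0  # interpolation weights: 0 -> x_neg, 1 -> x_pos

    def interpolate(t):
        return [(1.0 - t) * a + t * b for a, b in zip(x_neg, x_pos)]

    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if clf(interpolate(mid)) == 1:
            hi = mid  # boundary lies in [lo, mid]
        else:
            lo = mid  # boundary lies in [mid, hi]
    return interpolate(0.5 * (lo + hi))
```

With a toy threshold classifier such as `clf = lambda x: int(x[0] > 0.5)`, searching between `[0.0]` and `[1.0]` returns a point whose first coordinate is within ε of 0.5.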