Algorithmic Recourse for Long-Term Improvement

Authors: Kentaro Kanamori, Ken Kobayashi, Satoshi Hara, Takuya Takagi

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrated that our approaches could assign improvement-oriented actions to more instances than the existing methods."
Researcher Affiliation | Collaboration | "1Fujitsu Limited, Japan; 2Institute of Science Tokyo, Japan; 3The University of Electro-Communications, Japan. Correspondence to: Kentaro Kanamori <EMAIL>."
Pseudocode | Yes | "Algorithm 1 presents our algorithm for Problem 3.1 based on the CLB... Algorithm 2 presents our algorithm for Problem 3.1 based on the CBO with the BoW forest."
Open Source Code | Yes | "All the code was implemented in Python 3.10 and is available at https://github.com/kelicht/arlim."
Open Datasets | Yes | "We used three real-world datasets: Credit (N = 30000, D = 13) (Yeh & Lien, 2009), Diabetes (N = 769, D = 8) (Dua & Graff, 2017), and COMPAS (N = 6167, D = 9) (Angwin et al., 2016). ... All the datasets used in our experiments are publicly available and do not contain any identifiable information or offensive content."
Dataset Splits | Yes | "We randomly split the dataset S = {(x_n, y_n)}_{n=1}^{N} into the training set S_tr, recourse set S_re, and test set S_te with a ratio of 2 : 1 : 1."
Hardware Specification | Yes | "All the experiments were conducted on macOS Sequoia with an Apple M2 Ultra CPU and 128 GB memory."
Software Dependencies | Yes | "All the code used in our experiments was implemented in Python 3.10 with scikit-learn 1.5.2."
Experiment Setup | Yes | "For both LinUCB and BwOUCB, we set m = 10. We also set λ = 20.0 for LinUCB and B = 50 for BwOUCB, respectively. ... We used the ℓ1-norm ∥a∥_1 as the cost function c and set ν = 1/D for computing the executing probability E."
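Two of the checklist items above (the 2 : 1 : 1 dataset split and the ℓ1-norm cost function) can be sketched concretely. The snippet below is a minimal illustration only: the synthetic data shapes, random seeds, and variable names are assumptions, not the authors' released code (that lives at https://github.com/kelicht/arlim).

```python
# Sketch of the random 2:1:1 train/recourse/test split and the l1-norm
# action cost described in the checklist.  Data, seeds, and names are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(1000, 8)             # stand-in features (e.g., Diabetes has D = 8)
y = rng.randint(0, 2, size=1000)  # stand-in binary labels

# Peel off the training half (the "2" of 2:1:1), then split the remainder
# evenly into the recourse set and the test set.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_re, X_te, y_re, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
print(len(X_tr), len(X_re), len(X_te))  # 500 250 250

def l1_cost(a: np.ndarray) -> float:
    """Cost c(a) = ||a||_1 of an action (feature-change) vector a."""
    return float(np.abs(a).sum())

print(l1_cost(np.array([0.5, -1.0, 0.0])))  # 1.5
```

Splitting in two stages reproduces the stated 2 : 1 : 1 ratio exactly; the recourse set S_re is the pool of instances to which actions are assigned, separate from both training and evaluation data.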