On Efficient Estimation of Distributional Treatment Effects under Covariate-Adaptive Randomization
Authors: Undral Byambadalai, Tomu Hirata, Tatsushi Oka, Shota Yasui
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulation studies and empirical analyses of microcredit programs highlight the practical advantages of our method. |
| Researcher Affiliation | Collaboration | 1CyberAgent, Inc., Tokyo, Japan 2Databricks Japan, Inc., Tokyo, Japan 3Department of Economics, Keio University, Tokyo, Japan. |
| Pseudocode | Yes | Algorithm 1 ML regression-adjusted DTE estimator with cross-fitting |
| Open Source Code | Yes | The replication code is publicly available at https://github.com/CyberAgentAILab/dte_car, and the method can be implemented using the Python library dte-adj (https://pypi.org/project/dte-adj/). |
| Open Datasets | Yes | The dataset from the field experiment conducted by Attanasio et al. (2015) is available for download at Open ICPSR (Project 113597, Version V1). |
| Dataset Splits | Yes | Input: data {(Yi, Wi, Xi, Si)} for i = 1, ..., n, split randomly into L roughly equal-sized folds (L > 1); M a supervised learning algorithm. For each level y in Y: for each (treatment w in W, stratum s in S, fold ℓ = 1, ..., L): train M on the data excluding fold ℓ, using observations in treatment group w within stratum s; use M to obtain predictions µ̂w(y, Si, Xi) for all observations in stratum s in fold ℓ. |
| Hardware Specification | Yes | All experiments were carried out on a MacBook Pro equipped with an Apple M3 Pro chip and 36GB of memory. |
| Software Dependencies | No | The paper mentions "Python library dte-adj" and the use of "linear regression and gradient boosting", but no specific version numbers for these software components are provided in the text. |
| Experiment Setup | No | The paper mentions using "linear regression and gradient boosting" and "2-fold cross-fitting" or "10-fold cross-fitting" for regression adjustments. However, it does not specify any concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) for these models, which are crucial for reproducing the experimental setup. |
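The cross-fitting loop quoted in the Dataset Splits row can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of Algorithm 1's structure, not the paper's implementation (use the dte-adj library for that): it cross-fits predictions of the indicator 1{Y ≤ y} for one (y, w) pair, training a plain OLS regression (one of the learners the paper mentions) within each (treatment, stratum) cell on out-of-fold data, then predicting for every observation in that stratum in the held-out fold. All function and variable names are illustrative.

```python
import numpy as np

def cross_fit_predictions(Y, W, X, S, y_level, w, n_folds=2, seed=None):
    """Cross-fitted predictions mu_hat_w(y, S_i, X_i) for one outcome level
    y and treatment arm w, following the structure of Algorithm 1.

    Illustrative sketch only: the learner M is OLS on the binary outcome
    1{Y <= y_level}; the paper also uses gradient boosting.
    """
    rng = np.random.default_rng(seed)
    n = len(Y)
    folds = rng.permutation(n) % n_folds        # random assignment to L folds
    Z = (Y <= y_level).astype(float)            # indicator outcome 1{Y <= y}
    mu_hat = np.full(n, np.nan)

    for s in np.unique(S):
        in_s = (S == s)
        for f in range(n_folds):
            # Train M on treatment group w within stratum s, excluding fold f.
            train = in_s & (W == w) & (folds != f)
            X_tr = np.column_stack([np.ones(train.sum()), X[train]])
            beta, *_ = np.linalg.lstsq(X_tr, Z[train], rcond=None)
            # Predict for ALL observations in stratum s that fall in fold f.
            pred = in_s & (folds == f)
            X_pr = np.column_stack([np.ones(pred.sum()), X[pred]])
            mu_hat[pred] = X_pr @ beta
    return mu_hat
```

Looping this routine over a grid of outcome levels y and both treatment arms yields the full set of cross-fitted predictions that the regression-adjusted DTE estimator plugs in.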