Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Active Treatment Effect Estimation via Limited Samples

Authors: Zhiheng Zhang, Haoxiang Wang, Haoxuan Li, Zhouchen Lin

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Through simulations and real-world experiments, we show that our method achieves higher estimation accuracy with fewer samples than traditional estimators endowed with asymptotic normality and other estimators backed by finite-sample guarantees. [...] Empirical evaluations on synthetic and real-world datasets demonstrate that RWAS consistently outperforms existing baseline methods, achieving higher estimation accuracy with fewer samples.
Researcher Affiliation Academia 1 School of Statistics and Data Science, Shanghai University of Finance and Economics, Shanghai 200433, P.R. China; 2 School of Mathematical Sciences, Peking University; 3 Center for Data Science, Peking University; 4 State Key Lab of General AI, School of Intelligence Science and Technology, Peking University; 5 Institute for Artificial Intelligence, Peking University; 6 Pazhou Laboratory (Huangpu), Guangzhou, China. Correspondence to: Haoxuan Li <EMAIL>, Zhouchen Lin <EMAIL>.
Pseudocode Yes Algorithm 1: RWAS estimator; Algorithm 2: IRD, modified from Chen & Price (2019); Algorithm 3: CGAS; Algorithm 4: Conflict-Graph-Design (CGD, following Kandiros et al. (2024)); Algorithm 5: GSW design (following Harshaw et al. (2024))
Open Source Code Yes Anonymous code is available at https://github.com/ZHzhang01/ICML_Finite_sample/settings.
Open Datasets Yes We evaluate the performance of methods on the following real-world datasets: Boston Dataset (Harrison Jr & Rubinfeld, 1978), IHDP Dataset (Multisite, 1990; Dorie, 2016), Twins Dataset (Almond et al., 2005), and LaLonde Dataset (LaLonde, 1986).
Dataset Splits Yes Algorithm 1 RWAS estimator: ... Uniformly sample m samples S_m ⊆ [n] with |S_m| = m; let S̄ = [n] \ S_m. ... Based on this partition, we can construct an unbiased estimator (lines 6-7). Table 2. Upper: Synthetic experiment: Error (sd) of ATE estimations. Prop. = proportion of samples used in estimation.
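The uniform split quoted above (a subset S_m of size m and its complement [n] \ S_m) can be sketched as follows; the variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 20  # illustrative sizes, not from the paper

# Uniformly sample m indices S_m from [n] without replacement.
S_m = rng.choice(n, size=m, replace=False)

# The complement S_bar = [n] \ S_m holds the remaining units.
S_bar = np.setdiff1d(np.arange(n), S_m)
```

Together the two index sets partition [n], which is the property the quoted unbiased-estimator construction relies on.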
Hardware Specification No The paper does not provide specific hardware details such as GPU/CPU models, memory, or computing infrastructure used for running experiments.
Software Dependencies No The paper does not explicitly list any specific software dependencies or library versions used in the experiments.
Experiment Setup Yes ATE Dataset: Denote the sample size as n and set the number of covariates d = 50. The matrix of covariates X ∈ R^{n×d} is generated in three steps. First, generate a matrix X̃ ∈ R^{n×d} with each entry sampled independently from the uniform distribution on [0, 0.01]. Then, a Gram-Schmidt orthogonalization is performed on the column space of X̃ to obtain an orthogonal matrix Q ∈ R^{n×d} satisfying QᵀQ = I_d. Finally, set X = n/10 · Q to recover the column norm. The potential-outcome vector for control, y(0), is generated uniformly at random from [0, 5], and the individual treatment effect vector t satisfies t = Xb + r, with each element of b ∈ R^d a uniform random number in [0, 1] and r ∈ R^n following a mean-zero Gaussian distribution with standard deviation sd = 0.2. Eventually, y(1) is generated via t = y(1) − y(0). The ground truth of the ATE is set as τ = (1/n) Σ_{i∈[n]} t_i, with t = (t_1, ..., t_n).
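The three-step generation process quoted above can be sketched in NumPy. This is an illustrative reading of the extracted text, not the authors' code: it uses QR factorization as the numerically stable equivalent of Gram-Schmidt, takes the extracted scaling "X = n/10 · Q" literally (the original symbol may have been garbled), and picks an arbitrary n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50  # d = 50 as in the paper; n is an assumed sample size

# Step 1: entries i.i.d. uniform on [0, 0.01].
X_tilde = rng.uniform(0.0, 0.01, size=(n, d))

# Step 2: orthonormalize the column space (reduced QR), so Q^T Q = I_d.
Q, _ = np.linalg.qr(X_tilde)

# Step 3: rescale to recover the column norm, per the quoted setup.
X = (n / 10) * Q

# Potential outcomes: y(0) uniform on [0, 5]; effect t = X b + r.
y0 = rng.uniform(0.0, 5.0, size=n)
b = rng.uniform(0.0, 1.0, size=d)
r = rng.normal(0.0, 0.2, size=n)
t = X @ b + r
y1 = y0 + t  # y(1) defined through t = y(1) - y(0)

tau = t.mean()  # ground-truth ATE, tau = (1/n) * sum_i t_i
```

After step 2, each column of Q has unit norm, so the final scaling fixes the magnitude of every covariate column at once.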