Efficient Causal Decision Making with One-sided Feedback

Authors: Jianing Chu, Shu Yang, Wenbin Lu, Pulak Ghosh

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical experiments and a real-world data application demonstrate the empirical performance of our proposed methods."
Researcher Affiliation | Collaboration | Jianing Chu (Amazon); Shu Yang and Wenbin Lu (Department of Statistics, North Carolina State University); Pulak Ghosh (Indian Institute of Management)
Pseudocode | No | The paper describes methods and theoretical proofs using mathematical notation and text, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about open-sourcing code or a link to a code repository.
Open Datasets | No | "A simulated dataset based on the real data is available upon request."
Dataset Splits | Yes | "We consider samples with size n = 1000, 2000. ... We randomly sample the training data with a size 3000 and 5000. The proposed efficient estimator over the entire dataset is used as the testing value."
Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU/CPU models) used for running the experiments.
Software Dependencies | No | The paper mentions using "random forest (RF) models", a "generalized additive model (GAM)", and a "tree-based classification algorithm", but does not provide specific version numbers for these software components or libraries.
Experiment Setup | Yes | "We consider a correctly specified logistic regression model for φ(η). We obtain η̂naive using g(x; η) = (1, x1, x2, x3)T. Specifically, in case 1, all the regressions with pseudo-outcomes are using random forest (RF) models. In case 2, we estimate P(Y = 1 | X, A = 1) using a generalized additive model (GAM). For the DR estimator, we estimate w(x) using GAM in both cases. We estimate E(y | x) using RF in case 1 and using GAM in case 2. ... We use a tree-based classification algorithm introduced in Zhou et al. (2023) and focus on depth-2 decision trees for illustration."
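The quoted experiment setup can be sketched in code. The following is a minimal, hedged stand-in for the paper's case-1 pipeline under one-sided feedback: a logistic model in (1, x1, x2, x3) for the logging policy φ(η), a random-forest regression on an inverse-propensity pseudo-outcome, and a depth-2 decision tree as a simple substitute for the tree-search policy class of Zhou et al. (2023). All data-generating choices, variable names, and the pseudo-outcome construction here are illustrative assumptions, not the authors' actual code.

```python
# Illustrative sketch of the case-1 setup; not the authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000                                # one of the paper's sample sizes
X = rng.normal(size=(n, 3))             # covariates x1, x2, x3

# Logging policy: logistic in (1, x1, x2, x3), mirroring g(x; eta).
eta = np.array([0.2, 0.5, -0.5, 0.3])   # assumed coefficients
p_a = 1.0 / (1.0 + np.exp(-(eta[0] + X @ eta[1:])))
A = rng.binomial(1, p_a)

# One-sided feedback: the binary outcome Y is observed only when A = 1.
y_latent = (X[:, 0] - X[:, 1] + rng.normal(size=n) > 0).astype(float)
Y = np.where(A == 1, y_latent, np.nan)

# Step 1: estimate the propensity phi(eta) with logistic regression.
prop = LogisticRegression().fit(X, A)
phat = prop.predict_proba(X)[:, 1]

# Step 2: regress an inverse-propensity pseudo-outcome on X with RF;
# untreated units contribute zero, propensities are clipped for stability.
pseudo = np.where(A == 1, np.nan_to_num(Y) / np.clip(phat, 0.05, None), 0.0)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, pseudo)
mu_hat = rf.predict(X)

# Step 3: fit a depth-2 decision-tree policy on the estimated scores
# (a plain CART stand-in for the tree search of Zhou et al., 2023).
policy = DecisionTreeClassifier(max_depth=2, random_state=0)
policy.fit(X, (mu_hat > mu_hat.mean()).astype(int))
```

The depth-2 cap matches the paper's "depth-2 decision trees for illustration"; in practice the exact policy-search algorithm of Zhou et al. (2023) would replace the plain CART fit.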