Learning Representations of Instruments for Partial Identification of Treatment Effects
Authors: Jonas Schweisthal, Dennis Frauen, Maresa Schröder, Konstantin Hess, Niki Kilbertus, Stefan Feuerriegel
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We further perform extensive experiments to demonstrate the effectiveness across various settings. Overall, our procedure offers a novel path for practitioners to make use of potentially high-dimensional instruments (e.g., as in Mendelian randomization). |
| Researcher Affiliation | Academia | ¹LMU Munich, ²Munich Center for Machine Learning (MCML), ³School of Computation, Information and Technology, TU Munich, ⁴Helmholtz Munich. Correspondence to: Jonas Schweisthal <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Two-stage learner for estimating bounds with complex instruments |
| Open Source Code | Yes | Code is available at https://github.com/JSchweisthal/ComplexPartialIdentif. |
| Open Datasets | Yes | We provide results using real-world data from an ADJUVANT chemotherapy study (Liu et al., 2021) as provided in https://github.com/cancer-oncogenomics/minerva-adjuvant-nsclc/tree/v1.0.0. |
| Dataset Splits | Yes | To create the simulated data used in Sec. 6, we sample n = 2000 from the data-generating process above. We then split the data into train (40%), val (20%), and test (40%) sets such that the bounds and deviation can be calculated on the same amount of data for training and testing. |
| Hardware Specification | Yes | Each training run of the experiments could be performed on a CPU with 8 cores in under 15 minutes. |
| Software Dependencies | No | We use PyTorch Lightning for implementation. |
| Experiment Setup | Yes | For all models, we use the Adam optimizer with a learning rate of 0.03. We train our models for a maximum of 100 epochs and apply early stopping. For our method, we fixed λ = 1 and tuned γ by random search over [0, 1]. |
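The quoted split and training settings can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the use of NumPy, and the random seed are assumptions; only the 40/20/40 proportions, n = 2000, and the optimizer/hyperparameter values come from the table above.

```python
import numpy as np

def split_indices(n, seed=0):
    """Shuffle indices and split 40% train / 20% val / 40% test,
    matching the proportions quoted in the Dataset Splits row.
    (Function name and seed are illustrative assumptions.)"""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(0.4 * n)
    n_val = int(0.2 * n)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# Training configuration as quoted in the Experiment Setup row
# (lambda fixed at 1; gamma tuned by random search over [0, 1]).
config = {
    "optimizer": "Adam",
    "learning_rate": 0.03,
    "max_epochs": 100,
    "early_stopping": True,
    "lambda": 1.0,
}

train, val, test = split_indices(2000)
print(len(train), len(val), len(test))  # 800 400 800
```

With n = 2000 this yields 800 training, 400 validation, and 800 test samples, so the train and test bounds are estimated on equal amounts of data, as the quote notes.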