Causal Dynamic Variational Autoencoder for Counterfactual Regression in Longitudinal Data
Authors: Mouad El Bouchattaoui, Myriam Tami, Benoit Lepetit, Paul-Henry Cournède
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations on synthetic and real-world datasets show that CDVAE outperforms existing baselines. Moreover, we demonstrate that state-of-the-art models significantly improve their CATE estimates when augmented with the latent substitutes learned by CDVAE, approaching oracle-level performance without direct access to the true adjustment variables. ... Extensive experiments on synthetic data and semi-synthetic data derived from real-world datasets such as MIMIC-III (Johnson et al., 2016) validate our approach. ... 5 Experiments |
| Researcher Affiliation | Collaboration | 1Paris-Saclay University, CentraleSupélec, MICS Lab, Gif-sur-Yvette, France 2Saint-Gobain, France |
| Pseudocode | Yes | Algorithm 1 Sinkhorn-Knopp Algorithm ... Algorithm 2 Weighted Wasserstein Distance Computation ... Algorithm 3 Pseudo-code for training CDVAE |
| Open Source Code | Yes | The implementation is available at https://github.com/moad-lihoconf/cdvae |
| Open Datasets | Yes | Extensive experiments on synthetic data and semi-synthetic data derived from real-world datasets such as MIMIC-III (Johnson et al., 2016) validate our approach. |
| Dataset Splits | Yes | The experiment is conducted with 5000 samples for training, 500 for validation, and 1000 for testing. ... We conduct our experiments with 1400 patients for training, 200 patients for validation, and 400 patients for testing. |
| Hardware Specification | Yes | All experiments were run on a single NVIDIA Tesla T4 GPU. |
| Software Dependencies | No | We use PyTorch (Paszke et al., 2019) and PyTorch Lightning (Falcon & team, 2019) to implement CDVAE and all baselines. ... William Falcon and The PyTorch Lightning team. PyTorch Lightning. https://github.com/PyTorchLightning/pytorch-lightning, 2019. Version 1.0. --- Explanation: While PyTorch Lightning has a version specified in its citation, PyTorch itself, a key dependency, does not have a version number explicitly stated in the paper text provided. The requirement is for "multiple key software components with their versions". |
| Experiment Setup | Yes | All models are fine-tuned using a grid search over hyperparameters, including architecture and optimizer settings. Model selection is based on the mean squared error (MSE) of factual outcomes on a validation set, which is also used as the criterion for early stopping. More details are in Appendix F. ... We report in the following tables the search space of hyperparameters for all baselines. |
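The Pseudocode row above reports that the paper includes a Sinkhorn-Knopp algorithm and a weighted Wasserstein distance computation (Algorithms 1 and 2). As context for readers checking reproducibility, the following is a minimal, generic sketch of entropic-regularized Sinkhorn iterations for an optimal-transport cost; it is an assumption-laden illustration of the standard technique, not the paper's exact Algorithm 1 (function name, defaults, and stopping rule here are hypothetical).

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=1000):
    """Entropic-regularized OT between histograms a (n,) and b (m,)
    under cost matrix C (n, m). Generic Sinkhorn-Knopp sketch."""
    K = np.exp(-C / eps)              # Gibbs kernel
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)               # scale rows to match marginal a
        v = b / (K.T @ u)             # scale columns to match marginal b
    P = u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)
    cost = np.sum(P * C)              # regularized transport cost
    return P, cost
```

In the paper's setting the marginals would carry sample weights (hence "Weighted Wasserstein Distance" in Algorithm 2); the sketch above accepts arbitrary nonnegative weight vectors `a` and `b` summing to one, so weighting amounts to choosing those marginals.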