Enhancing Counterfactual Estimation: A Focus on Temporal Treatments

Authors: Xin Wang, Shengfei Lyu, Kangyang Luo, Lishan Yang, Huanhuan Chen, Chunyan Miao

IJCAI 2025

Reproducibility assessment. Each item below gives the variable, the result, and the LLM's supporting response.
Research Type: Experimental. "Experimental results on both synthetic and real-world datasets demonstrate that CTD-NKO achieves state-of-the-art performance and efficiency."
Researcher Affiliation: Academia. "1University of Science and Technology of China, 2Nanyang Technological University"; author emails redacted (EMAIL, EMAIL, EMAIL).
Pseudocode: Yes. "Algorithm 1: Pseudocode of Training CTD-NKO"
Open Source Code: Yes. Code available at https://github.com/wangxin0126/CTD-NKO.
Open Datasets: Yes. "The FS-Tumor dataset has been widely adopted in previous studies evaluating counterfactual estimation over time, such as [Bica et al., 2020; Melnychuk et al., 2022; Lim et al., 2018; Kacprzyk et al., 2024]. MIMIC-III is a comprehensive database that encompasses electronic health records of patients in the intensive care unit and has been widely utilized to evaluate the performance of various models in complex real-world medical settings."
Dataset Splits: No. The paper uses known datasets (FS-Tumor, MIMIC-III) and refers to a "standard workflow" for benchmarking, but the main text does not explicitly state the training, validation, or test splits (e.g., percentages, sample counts, or specific predefined splits).
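Since the paper does not report its splits, a minimal sketch of how a deterministic train/validation/test split could be documented is shown below. The 70/15/15 proportions, the function name `split_indices`, and the seed are hypothetical illustrations, not values taken from the paper.

```python
# Hypothetical sketch of a reproducible dataset split; the 70/15/15
# proportions and seed 42 are illustrative assumptions, NOT the paper's.
import random

def split_indices(n, train=0.7, val=0.15, seed=42):
    """Shuffle indices 0..n-1 with a fixed seed and cut them into
    train / validation / test partitions."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # seeded, so fully reproducible
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = split_indices(1000)
print(len(tr), len(va), len(te))  # 700 150 150
```

Reporting the seed and proportions alongside results is the kind of detail whose absence this checklist item flags.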
Hardware Specification: No. The paper reports "peak GPU memory usage" in its efficiency analysis, but it does not specify the GPU models, CPU models, or any other hardware used to run the experiments.
Software Dependencies: No. The paper states: "We implement CTD-NKO using the Pytorch Lightning framework and employ the Adam algorithm [Kingma and Ba, 2014] for gradient optimization." It names PyTorch Lightning and Adam but provides no version numbers for these software components.
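The missing detail here can be captured with the standard library alone. The sketch below, assuming the distribution names `torch` and `pytorch-lightning` (the packages the paper mentions; Adam ships with the `torch` release), records exact installed versions for a reproducibility appendix.

```python
# Sketch: record exact dependency versions using only the standard library.
# Package names "torch" and "pytorch-lightning" are taken from the paper's
# stated stack; any environment-specific versions are whatever is installed.
from importlib import metadata

def report_versions(packages):
    """Return {package: installed version}, marking packages that are absent."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = "not installed"
    return versions

print(report_versions(["torch", "pytorch-lightning"]))
```

Emitting this dictionary into a log or appendix is one low-effort way to satisfy the checklist item this row marks as missing.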
Experiment Setup: No. The paper defers details to its appendix ("For detailed parameter settings, please refer to Appendix H." and "To guarantee a fair comparison, we perform hyperparameter tuning for these baseline methods (refer to Appendix H for details)."), and the main text itself provides no specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or training configurations.