Survival Analysis via Density Estimation

Authors: Hiroki Yanagisawa, Shunta Akiyama

ICML 2025

Reproducibility assessment (variable, result, and LLM response):
Research Type: Experimental. We conducted a series of experimental evaluations to assess the performance of our proposed two-step algorithm. The experiments ran on a virtual machine with a single CPU (no GPU) and 64 GB of memory, running CentOS Stream 9. The implementation used Python 3.11.6 and PyTorch 2.1.2. The datasets employed were the Dialysis and oldmort datasets, sourced from the Python package SurvSet (Drysdale, 2022).
Researcher Affiliation: Industry. CyberAgent, Tokyo, Japan. Correspondence to: Hiroki Yanagisawa <EMAIL>.
Pseudocode: Yes. Algorithm 1: Two-Step (TS) Algorithm.
Open Source Code: Yes. The implementations of our models are accessible at https://github.com/CyberAgentAILab/cenreg.
Open Datasets: Yes. The datasets employed were the Dialysis and oldmort datasets, sourced from the Python package SurvSet (Drysdale, 2022). ... The Framingham (Kannel & McGee, 1979) and PBC (Therneau & Grambsch, 2000) datasets with K = 3 were the ones used in (Jeanselme et al., 2023).
Dataset Splits: Yes. All datasets were randomly split into training (65%), validation (15%), and testing (20%) sets. The reported results are the mean and standard deviation over five random splits.
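The splitting protocol above can be sketched as follows. This is an illustrative sketch only: the 65/15/20 fractions and the five repetitions come from the paper, while the seeding scheme, helper names, and the placeholder "metric" are assumptions made for the example.

```python
import random
import statistics

def split_indices(n, seed, frac_train=0.65, frac_val=0.15):
    """Shuffle indices and cut them into train/validation/test parts.

    The 65/15/20 fractions follow the paper; the seeding scheme and
    function name are illustrative assumptions.
    """
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    n_train = int(frac_train * n)
    n_val = int(frac_val * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Five random splits, as in the paper; the "score" is a placeholder
# standing in for whatever evaluation metric is computed on the test set.
scores = []
for seed in range(5):
    train, val, test = split_indices(1000, seed)
    scores.append(len(test) / 1000)  # stand-in for a test metric

mean, std = statistics.mean(scores), statistics.stdev(scores)
```

Reporting `mean` and `std` over the five splits matches the paper's "mean and standard deviation over five random splits" convention.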
Hardware Specification: Yes. The experiments ran on a virtual machine with a single CPU (no GPU) and 64 GB of memory, running CentOS Stream 9.
Software Dependencies: Yes. The software implementation used Python 3.11.6 and PyTorch 2.1.2.
Experiment Setup: Yes. For each dataset, a hyperparameter search determined the number of neurons in the hidden layers and the learning rate of the optimizer: the number of neurons was chosen from {4, 8, 16, 32, 64, 128, 256}, and the learning rate from {0.00001, 0.0001, 0.001, 0.01, 0.1, 1.0, 10.0}. ... A dropout layer with a dropout rate of 0.5 was employed, and ReLU was used as the activation function. The neural network was trained with the schedule-free AdamW optimizer (Defazio et al., 2024) and early stopping.
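The hyperparameter search described above amounts to a grid over the two quoted sets. The sketch below shows the grid enumeration only; the `validation_score` function is a hypothetical placeholder (the paper selects by validation performance, but the actual training and metric are not reproduced here).

```python
from itertools import product

# Search spaces quoted from the paper.
HIDDEN_UNITS = [4, 8, 16, 32, 64, 128, 256]
LEARNING_RATES = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]

def validation_score(n_units, lr):
    """Hypothetical stand-in for training a model with this configuration
    and scoring it on the validation split; a toy surrogate, not the
    paper's actual criterion."""
    return -abs(n_units - 64) - abs(lr - 1e-3)

# Enumerate all 7 x 7 = 49 configurations and keep the best-scoring one.
best = max(product(HIDDEN_UNITS, LEARNING_RATES),
           key=lambda cfg: validation_score(*cfg))
```

In practice each configuration would be trained with dropout (rate 0.5), ReLU activations, the schedule-free AdamW optimizer, and early stopping, then compared on the validation set.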