Learning Survival Distributions with the Asymmetric Laplace Distribution
Authors: Deming Sheng, Ricardo Henao
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive results on synthetic and real-world data demonstrate that the proposed method outperforms parametric and nonparametric approaches in terms of accuracy, discrimination and calibration. (from the Abstract) |
| Researcher Affiliation | Academia | 1Duke University. Correspondence to: Ricardo Henao <EMAIL>. |
| Pseudocode | No | The paper describes the model and its learning process mathematically and textually, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code required to reproduce the experiments presented in this paper is available at: https://github.com/demingsheng/ALD. |
| Open Datasets | Yes | For synthetic observed data with synthetic censoring, the input features x are generated uniformly as x ∼ U(0, 2)^d, where d represents the number of features. The observed variable o ∼ p(o\|x) and the censored variable c ∼ p(c\|x) follow distinct distributions, with each distribution parameterized differently, depending on the specific dataset configuration. ... Four of these datasets: METABRIC, WHAS, SUPPORT, and GBSG, were retrieved from the DeepSurv GitHub repository [1]. Other details are available in Katzman et al. (2018). The remaining three datasets: TMBImmuno, Breast MSK, and LGGGBM were sourced from cBioPortal [2] for Cancer Genomics. ... [1] https://github.com/jaredleekatzman/DeepSurv/ [2] https://www.cbioportal.org/ |
| Dataset Splits | Yes | To account for training and model initialization variability, we run all experiments 10 times with random splits of the data with partitions consistent with Table 1. ... A validation set is created by splitting 20% of the training set. |
| Hardware Specification | Yes | Hardware. All experiments were conducted on a MacBook Pro with an Apple M3 Pro chip, featuring 12 cores (6 performance and 6 efficiency cores) and 18 GB of memory. CPU-based computations were utilized for all experiments, as the models primarily relied on fully-connected neural networks. |
| Software Dependencies | No | The experiments were implemented using the PyTorch framework. ... For implementation, we utilize the concordance_index_censored function from the sksurv.metrics module, as documented in the scikit-survival API. ... We utilize the np.polyfit function from the NumPy module, as documented in the NumPy API ... We implemented the GBM model using the GradientBoostingSurvivalAnalysis class from the sksurv.ensemble module. ... The RSF model was implemented using the RandomSurvivalForest class from sksurv.ensemble. (No version numbers provided for PyTorch, scikit-survival, or NumPy.) |
| Experiment Setup | Yes | Hyperparameter settings. All experiments were repeated across 10 random seeds to ensure robust and reliable results. The hyperparameter settings were as follows: Default Neural Network Architecture: Fully-connected network with two hidden layers, each consisting of 100 hidden nodes, using ReLU activations. Default Epochs: 200 Default Batch Size: 128 Default Learning Rate: 0.01 Dropout Rate: 0.1 Optimizer: Adam Batch Norm: False |
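The synthetic-data recipe quoted in the Open Datasets row (features x ∼ U(0, 2)^d, event time o ∼ p(o\|x), censoring time c ∼ p(c\|x)) can be sketched in NumPy. The exponential rate functions below are hypothetical stand-ins; the paper parameterizes each distribution differently per dataset configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5

# Features drawn uniformly, x ~ U(0, 2)^d, as stated in the paper.
x = rng.uniform(0.0, 2.0, size=(n, d))

# Hypothetical choice for illustration: exponential event and censoring
# times whose rates depend on the features (the paper's actual
# distributions are dataset-specific).
event_rate = 0.5 + x.mean(axis=1)
censor_rate = 0.3 + x[:, 0]
o = rng.exponential(1.0 / event_rate)   # observed (event) time, o ~ p(o|x)
c = rng.exponential(1.0 / censor_rate)  # censoring time, c ~ p(c|x)

# Standard right-censoring: we record the earlier of the two times plus
# an indicator of whether the event (rather than censoring) occurred.
t = np.minimum(o, c)
delta = (o <= c).astype(int)
```

Under this construction every recorded time satisfies t ≤ o and t ≤ c, and delta marks which samples contribute an uncensored event to the likelihood.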
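The Software Dependencies row cites scikit-survival's concordance_index_censored for evaluating discrimination. As a self-contained illustration of what that metric measures, here is a minimal NumPy-free re-implementation of Harrell's censoring-aware concordance index (ties and tied times are handled simplistically; the sksurv implementation is more careful):

```python
def c_index(time, event, risk):
    """Harrell's concordance index for right-censored data.

    A pair (i, j) is comparable when the earlier time belongs to an
    observed event; it is concordant when the higher-risk subject
    fails first. Tied risk scores count as half-concordant.
    """
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # censored earlier time: pair outcome is unknown
        for j in range(n):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A model whose risk scores perfectly reverse the event times scores 1.0; a perfectly anti-concordant one scores 0.0, and 0.5 corresponds to random ranking.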
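The Experiment Setup row fully specifies the default network (two hidden layers of 100 ReLU units, dropout 0.1, Adam at lr 0.01, no batch norm), which can be transcribed directly into PyTorch. The output width and loss depend on the paper's ALD parameterization and are assumptions here, not part of the quoted setup.

```python
import torch
import torch.nn as nn

def make_default_net(in_features: int, out_features: int) -> nn.Sequential:
    """Default architecture from the paper's hyperparameter table:
    two hidden layers of 100 units, ReLU activations, dropout 0.1,
    and no batch normalization. `out_features` (e.g. the ALD
    location/scale/asymmetry heads) is an assumption."""
    return nn.Sequential(
        nn.Linear(in_features, 100),
        nn.ReLU(),
        nn.Dropout(0.1),
        nn.Linear(100, 100),
        nn.ReLU(),
        nn.Dropout(0.1),
        nn.Linear(100, out_features),
    )

# Stated defaults: Adam optimizer, learning rate 0.01
# (training would run for 200 epochs with batch size 128).
net = make_default_net(in_features=9, out_features=3)
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
```

Such a small fully-connected model is consistent with the Hardware row's note that CPU-only training on a laptop sufficed.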