Non-asymptotic Properties of Individualized Treatment Rules from Sequentially Rule-Adaptive Trials

Authors: Daiqi Gao, Yufeng Liu, Donglin Zeng

JMLR 2022

Reproducibility Assessment (Variable | Result | LLM Response)

- Research Type | Experimental | "We show by numerical examples that without much loss of the test value, our proposed algorithm can improve the training value significantly as compared to existing methods. Finally, we use a real data study to illustrate the performance of the proposed method."
- Researcher Affiliation | Academia | Daiqi Gao, Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Yufeng Liu, Department of Statistics and Operations Research, Department of Genetics, Department of Biostatistics, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Donglin Zeng, Department of Biostatistics, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pseudocode | Yes | Algorithm 1: Sequentially Rule-Adaptive Trial
- Open Source Code | No | The paper uses the package DTRlearn2 (Chen et al., 2019) to implement the OWL algorithm, but does not provide specific access to the source code for the methodology described in this paper.
- Open Datasets | Yes | "We use a real study to illustrate the performance of the proposed method. The Nefazodone CBASP trial was designed to compare the efficacy of several treatment options for patients with nonpsychotic chronic major depressive disorder (MDD) (Keller et al., 2000). ... following Minsker et al. (2016), which referred to Gunter et al. (2007)."
- Dataset Splits | Yes | "Five-fold cross validation is used here to avoid overfitting. Specifically, the data set is partitioned into five parts randomly. Four of the five parts are used iteratively as training data to apply our algorithm in generating the treatment suggestion. The last part is used as the test set to evaluate the ITR."
- Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments.
- Software Dependencies | No | The paper mentions using "the package DTRlearn2 (Chen et al., 2019)" but does not specify its version number or any other software dependencies with version details.
- Experiment Setup | Yes | "The truncation parameter ϵ_n is defined as ϵ_0 n^{−(1−θ)/4}, where ϵ_0 ∈ (0, 0.5], θ ∈ (0, 1] ... The scheduling parameter γ_i for SRAT-B is taken as 0.999^i ... α_i = 0.2 for all i for LinUCB and SRAT-B ..."