Robust Alzheimer's Progression Modeling using Cross-Domain Self-Supervised Deep Learning

Authors: Saba Dadsetan, Mohsen Hejrati, Shandong Wu, Somaye Hashemifar

TMLR 2023

Reproducibility Variable Result LLM Response
Research Type: Experimental. "We evaluated the performance of different supervised and self-supervised models pretrained on either natural images or medical images, or both. Our extensive experiments reveal that (1) self-supervised pretraining on natural images followed by self-supervised learning on unlabeled medical images outperforms alternative transfer learning methods... (2) self-supervised models pretrained on medical images outperform those pretrained on natural images... To evaluate our model's performance, we used the Pearson correlation coefficient (r) and the coefficient of determination (R²)."
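For reference, the two evaluation metrics named in this response (Pearson r and R²) can be computed as below. This is an illustrative sketch, not the authors' evaluation code; the function names and sample values are made up for the example.

```python
import numpy as np

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between targets and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.corrcoef(y_true, y_pred)[0, 1]

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy usage with invented numbers:
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.9]
r = pearson_r(y_true, y_pred)
r2 = r_squared(y_true, y_pred)
```

Note that r measures linear association only, while R² additionally penalizes systematic bias in the predictions, which is why the paper reports both.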
Researcher Affiliation: Collaboration. Saba Dadsetan (EMAIL), Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, USA; Department of Artificial Intelligence, Genentech Inc., South San Francisco, CA, USA.
Pseudocode: No. The paper describes its methodology through textual descriptions and diagrams (Figure 1), but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code: No. The paper mentions using implementations from specific GitHub repositories for SimCLR and Barlow Twins in Appendix A.1 (e.g., "We use an implementation of SimCLR in the Pytorch-lightning repository for creating our framework. https://github.com/Lightning-AI/lightning-bolts.git"), but it does not provide a direct link to the authors' own source code for the Alzheimer's progression modeling method described in this paper.
Open Datasets: Yes. All individuals included in the analysis (around 10k) come from eight external studies: ADNI (Petersen et al., 2010), BIOFINDER (Mattsson-Carlgren et al., 2020), FACEHBI (Moreno-Grau et al., 2018), AIBL (Ellis et al., 2009), HABS (Dagley et al., 2017), BIOCARD (Moghekar et al., 2013), WRAP (Langhough Koscik et al., 2021), and OASIS-3 (LaMontagne et al., 2019). Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu) and derived from the BIOCARD study, supported by grant U19 AG033655 from the National Institute on Aging; a complete list of the BioFINDER study group members can be found at www.biofinder.se.
Dataset Splits: Yes. Around 90% of the development set is used for training, with the remaining 10% set aside for validation. Roughly 30% of the fine-tuning dataset is reserved as an in-study test set, and the remaining data is divided into training and validation sets.
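The splitting procedure reported above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; in particular, the train/validation proportion inside the fine-tuning remainder is not specified in the paper, so the 10% used here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the splits are reproducible

def split_development(ids, val_frac=0.10):
    """Development set: ~90% train / ~10% validation, as reported."""
    ids = rng.permutation(ids)
    n_val = int(len(ids) * val_frac)
    return ids[n_val:], ids[:n_val]  # (train, validation)

def split_finetune(ids, test_frac=0.30, val_frac=0.10):
    """Fine-tuning set: ~30% held out as an in-study test set.

    val_frac within the remainder is an assumption (not stated in the paper).
    """
    ids = rng.permutation(ids)
    n_test = int(len(ids) * test_frac)
    test, rest = ids[:n_test], ids[n_test:]
    n_val = int(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], test  # (train, validation, test)

subjects = np.arange(100)                     # hypothetical subject IDs
dev_train, dev_val = split_development(subjects)
ft_train, ft_val, ft_test = split_finetune(subjects)
```

Splitting by subject ID (rather than by scan) is the natural choice here, since one subject can contribute several longitudinal scans and leakage across splits would inflate the reported metrics.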
Hardware Specification: No. The paper does not explicitly describe the hardware used to run its experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies: No. The paper mentions specific software implementations (e.g., SimCLR from the Pytorch-lightning repository, Barlow Twins), but it does not provide version numbers for these or for other key components such as PyTorch, Python, or CUDA.
Experiment Setup: Yes. "A.1 Self-supervised Models. SimCLR: the learning rate is 1e-4, Adam is used as the optimizer, and NT-Xent is used as the loss function. ... Barlow Twins: this method uses LARS as the optimizer and a learning rate of 1e-4, with Cosine Warmup serving as the learning rate scheduler. ... SwAV: for this method, we use Adam as the optimizer, with a learning rate of 1e-4."
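The Appendix A.1 hyperparameters quoted above can be collected into a small configuration map for reference. This is a summary sketch of the reported settings, not the authors' training code; the dictionary layout and helper function are invented for illustration.

```python
# Per-method pretraining settings as reported in Appendix A.1 of the paper.
# All three self-supervised methods use a learning rate of 1e-4.
SSL_CONFIGS = {
    "SimCLR":       {"optimizer": "Adam", "lr": 1e-4, "loss": "NT-Xent"},
    "Barlow Twins": {"optimizer": "LARS", "lr": 1e-4, "scheduler": "CosineWarmup"},
    "SwAV":         {"optimizer": "Adam", "lr": 1e-4},
}

def make_optimizer_spec(method):
    """Render a short human-readable optimizer spec for one SSL method."""
    cfg = SSL_CONFIGS[method]
    return f"{cfg['optimizer']}(lr={cfg['lr']:.0e})"

for name in SSL_CONFIGS:
    print(name, "->", make_optimizer_spec(name))
```

Keeping such settings in a single declarative map (rather than scattered through training scripts) makes it straightforward to report them in a paper appendix and to audit them for reproducibility reviews like this one.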