Association Discovery and Diagnosis of Alzheimer's Disease with Bayesian Multiview Learning
Authors: Zenglin Xu, Shandian Zhe, Yuan Qi, Peng Yu
JAIR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In order to examine the performance of the proposed method, we design a simulation study and a real-world study for Alzheimer's Disease. 5.1 Simulation Study... 5.2 Real-World Study on Alzheimer's Disease... The results show that our model achieves the highest prediction accuracy among all the competing methods. Analysis on an imaging genetics dataset for the study of Alzheimer's Disease (AD) indicates that our model identifies biologically meaningful associations between genetic variations and MRI features, and achieves significantly higher accuracy for predicting ordinal AD stages than the competing methods. |
| Researcher Affiliation | Collaboration | Zenglin Xu (EMAIL), Big Data Research Center, School of Computer Science & Engineering, University of Electronic Science & Technology of China, Chengdu, Sichuan, 611731 China; Shandian Zhe (EMAIL), Department of Computer Science, Purdue University, West Lafayette, IN 47906 USA; Yuan (Alan) Qi (EMAIL), Department of Computer Science & Department of Statistics, Purdue University, West Lafayette, IN 47906 USA; Peng Yu (EMAIL), Eli Lilly and Company, Indianapolis, IN 46225, USA |
| Pseudocode | No | The paper describes the model and inference procedure in mathematical detail, including equations for updating variational distributions (e.g., "The detailed updates are given in the following paragraphs"). However, it does not contain any clearly labeled pseudocode or algorithm blocks with structured steps. |
| Open Source Code | No | Regarding the software implementation, we used the built-in Matlab routine for CCA and the code by (Sun et al., 2011) for sparse CCA. We implemented MRLasso based on the Glmnet package (cran.r-project.org/web/packages/glmnet/index.html). We used the published code for lasso, elastic net, GPOR and LapSVM. The paper mentions using or implementing methods based on third-party software/packages but does not provide access to the authors' own implementation of the methodology described in this paper. |
| Open Datasets | Yes | We conducted association analysis and diagnosis of AD based on a dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI) 1. The ADNI study is a longitudinal multisite observational study of elderly individuals with normal cognition, mild cognitive impairment, or AD. (Footnote 1: http://adni.loni.ucla.edu/) |
| Dataset Splits | Yes | We partitioned the data into 10 subsets and used 9 of them for training and 1 subset for testing; we repeated the procedure 10 times to generate the averaged test results. And we used the 10-fold cross validation for each run to tune free parameters on the training data. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, or cloud resources) used for running the experiments. |
| Software Dependencies | No | Regarding the software implementation, we used the built-in Matlab routine for CCA and the code by (Sun et al., 2011) for sparse CCA. We implemented MRLasso based on the Glmnet package (cran.r-project.org/web/packages/glmnet/index.html). We used the published code for lasso, elastic net, GPOR and LapSVM. While software names like 'Matlab', 'Glmnet package', 'lasso', 'elastic net', 'GPOR', and 'LapSVM' are mentioned, specific version numbers for these tools are not provided, making it difficult to precisely reproduce the software environment. |
| Experiment Setup | Yes | For the precision parameter η, we assign a conjugate Gamma prior, p(η|r1, r2) = Gamma(η|r1, r2), where r1 and r2 are the hyperparameters, set to 10^-3 in our experiments. In our experiment, we set σ^2_1 = 1 and σ^2_2 = 10^6. We set a diffuse and non-informative hyperprior, i.e., l1 = l2 = 1 in our experiments. Similarly for Πh, d1 = d2 = 1 in our experiments. Similarly for πw, e1 = e2 = 1 in our experiments. To determine the dimension k for the latent features U in our method, we computed the variational lower bounds as an approximation to the model marginal likelihood (i.e., evidence), with various k values {10, 20, 40, 60}. We chose the value with the largest approximate evidence, which led to k = 20 (see Figure 5). |
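The split protocol quoted in the "Dataset Splits" row (10 subsets, 9 for training and 1 for testing, repeated across all folds) can be sketched as below. This is an illustrative reconstruction only, since the authors' code is not released; the function name and random seed are my own assumptions.

```python
import random

def ten_fold_splits(n_samples, n_folds=10, seed=0):
    """Partition sample indices into n_folds subsets and yield
    (train, test) index pairs, holding out one subset per round --
    a sketch of the protocol described in the paper."""
    rng = random.Random(seed)
    indices = list(range(n_samples))
    rng.shuffle(indices)
    # n_folds roughly equal subsets via strided slicing
    folds = [indices[i::n_folds] for i in range(n_folds)]
    for held_out in range(n_folds):
        test = folds[held_out]
        train = [i for f in range(n_folds) if f != held_out
                 for i in folds[f]]
        yield train, test

# One full repetition of the protocol on a hypothetical 100-sample dataset;
# parameter tuning would use a further 10-fold CV inside each train set.
splits = list(ten_fold_splits(100))
```

Each of the 10 rounds holds out a disjoint tenth of the data, so averaging test results over the rounds covers every sample exactly once.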