Multilayer Matrix Factorization via Dimension-Reducing Diffusion Variational Inference
Authors: Junbin Liu, Farzan Farnia, Wing-Kin Ma
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the proposed diffusion variational inference method leads to improved performance scores compared to several existing methods, including the VAE. In this section, we test the proposed DRD-VI for MMF with the latent priors in (3) and (4). |
| Researcher Affiliation | Academia | 1Department of Electronic Engineering, the Chinese University of Hong Kong, Hong Kong SAR of China 2Department of Computer Science and Engineering, the Chinese University of Hong Kong, Hong Kong SAR of China. Correspondence to: Junbin Liu <EMAIL>, Farzan Farnia <EMAIL>, Wing-Kin Ma <EMAIL>. |
| Pseudocode | No | The paper includes detailed mathematical derivations and descriptions of the proposed method but does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | No | The paper does not contain an explicit statement that the authors' code for the described methodology is released, nor does it provide a link to a code repository. Footnotes mention 'Codes for CNNAEU', 'Codes for MiSiCNet', 'Codes for SNMF', 'Codes for DMF', 'Codes for Deep Semi-NMF', 'Codes for DANMF', which refer to benchmark algorithms used for comparison, not the authors' own implementation. |
| Open Datasets | Yes | We conduct experiments on four hyperspectral image datasets as listed in Table 1. ... We test the methods on six datasets: a freely available version of CMU PIE (Sim et al., 2002), Caltech 101 Silhouettes, Fashion-MNIST (Xiao, 2017), GTSRB (Houben et al., 2013), DTD (Cimpoi et al., 2014), and Oxford-IIIT Pet (Parkhi et al., 2012). |
| Dataset Splits | No | The paper mentions using 'Fashion-MNIST (testing set)', 'GTSRB (testing set)', 'Oxford-IIIT Pet (testing set; resized)', and 'DTD (testing set; resized)', implying the use of predefined test sets. However, it does not provide explicit details for training, validation, or other splits for these or any other datasets (e.g., SAMSON, JASPER, APEX, URBAN, CMU PIE, Caltech 101 Silhouettes) used to reproduce the data partitioning for model training. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU model, CPU type, memory) used to run the experiments. Experimental settings in Appendix B.1 and B.2 focus on model parameters and training configurations, not hardware. |
| Software Dependencies | No | The paper mentions 'We adopt the Adam algorithm (Kingma & Ba, 2015) for optimization,' which refers to an optimization algorithm. However, it does not specify any software libraries or frameworks with their version numbers (e.g., Python, PyTorch, TensorFlow, or specific library versions) that would be needed to replicate the experiments. |
| Experiment Setup | Yes | The settings of DRD-VI are listed in Table 4. ... Table 4 (experimental settings of DRD-VI in abundance estimation): [d1, d2, ..., dT] = [64, 32, 16, 8, dT]; λ = 10^5; batch size = ROUND(L/100); epochs = 500; learning rate = 0.001. ... Table 7 presents the experiment settings of the DRD-VI methods, which are the same for all the datasets: Gray Image — [d1, d2, ..., dT] = [256, 128, 64, 32, 16], λ = 10^6, batch size = ROUND(L/100), 500 epochs, learning rate 0.001; Color Image — [256, 128, 64, 32, 16], λ = 3×10^6, batch size = ROUND(L/100), 500 epochs, learning rate 0.001. |
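
The reported hyperparameters can be collected into a small configuration helper. This is a minimal sketch transcribing Tables 4 and 7 as quoted above; the function and key names are illustrative assumptions, not taken from the authors' code, and `d_T` for abundance estimation is dataset-dependent in the paper.

```python
def drd_vi_settings(num_samples, task):
    """Return the DRD-VI settings reported in the paper's Tables 4 and 7.

    num_samples -- L, the number of data samples (batch size is ROUND(L/100))
    task -- one of 'abundance', 'gray_image', 'color_image'
    """
    base = {
        "batch_size": round(num_samples / 100),  # ROUND(L/100)
        "epochs": 500,
        "learning_rate": 0.001,  # optimized with Adam per the paper
    }
    if task == "abundance":
        # Final width d_T varies by dataset, so it is left symbolic here.
        base.update(layer_dims=[64, 32, 16, 8, "d_T"], lam=1e5)
    elif task == "gray_image":
        base.update(layer_dims=[256, 128, 64, 32, 16], lam=1e6)
    elif task == "color_image":
        base.update(layer_dims=[256, 128, 64, 32, 16], lam=3e6)
    else:
        raise ValueError(f"unknown task: {task}")
    return base
```

For example, a dataset with L = 10,000 samples would train with batches of 100 under every configuration; only the layer dimensions and the regularization weight λ change across tasks.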