Deep-Union Completion
Authors: Siddharth Baskar, Karan Vikyath Veeranna Rupashree, Daniel L. Pimentel-Alarcón
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on over 10 real datasets show that our method consistently outperforms the state-of-the-art accuracy by more than a staggering 40%. Our model is capable of achieving exceptional accuracy on real datasets, as evidenced by our experiments on the COIL-20 (Nene et al. 1996), Extended Yale B (Lee, Ho, and Kriegman 2005), ORL (Samaria and Harter 1994), Boston Housing (Harrison and Rubinfeld 1996), Period Changer (Gül and Rahim 2022), Heart Disease (Janosi and Detrano 1988), Human Activity Recognition Using Smartphones (Reyes-Ortiz and Parra 2012), The Oxford-IIIT Pet (Parkhi et al. 2012), and 102 Category Flower (Nilsback and Zisserman 2008) datasets. |
| Researcher Affiliation | Academia | Siddharth Baskar1*, Karan Vikyath Veeranna Rupashree1*, Daniel L Pimentel-Alarcón1,2 1Wisconsin Institute for Discovery 2University of Wisconsin-Madison |
| Pseudocode | No | The paper describes the architecture and methodology in narrative text within sections like 'Deep Union Completion Architecture' and its subsections, but does not present any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | GitHub repository: https://github.com/KaranVikyath/DUC |
| Open Datasets | Yes | Our model is capable of achieving exceptional accuracy on real datasets, as evidenced by our experiments on the COIL-20 (Nene et al. 1996), Extended Yale B (Lee, Ho, and Kriegman 2005), ORL (Samaria and Harter 1994), Boston Housing (Harrison and Rubinfeld 1996), Period Changer (Gül and Rahim 2022), Heart Disease (Janosi and Detrano 1988), Human Activity Recognition Using Smartphones (Reyes-Ortiz and Parra 2012), The Oxford-IIIT Pet (Parkhi et al. 2012), and 102 Category Flower (Nilsback and Zisserman 2008) datasets. |
| Dataset Splits | No | The paper describes introducing 20%, 50%, and 80% missing entries into the datasets for experiments and evaluation, but it does not specify traditional training, validation, and test splits for the datasets themselves. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers that would be needed to replicate the experiment. |
| Experiment Setup | Yes | For training we do not have a set epoch value for termination; rather, termination happens when the learning rate reaches one-tenth of the original learning rate. We implemented PReLU as the activation function for the neural layers in the pseudo-completion layer. For the purposes of our experiments, the synthetic dataset dimensions were set to 200 × 50 (m × n), with parameters set as mk = 50, K = 4, d = 3, and n = 50, including a minor noise factor σ = 0.01. Table 1 in Appendix C gives exact details of each of these datasets, along with the exact parameters used for our implementation and experiments. |
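The synthetic setup quoted above (a union of K = 4 subspaces of dimension d = 3, a small noise factor σ = 0.01, and 20%/50%/80% missing entries) can be sketched as follows. This is a minimal illustration, not the authors' code: the exact mapping of m, n, and mk to matrix dimensions is our assumption, and the per-subspace column count (`n_per`) and the uniform-at-random missingness pattern are assumptions chosen to match the paper's description.

```python
import numpy as np

def synth_union_of_subspaces(m=200, K=4, d=3, n_per=50,
                             sigma=0.01, missing_frac=0.5, seed=0):
    """Sample columns from K random d-dimensional subspaces of R^m,
    add Gaussian noise, then hide a fraction of entries (as NaN)
    uniformly at random. Interpretation of the paper's parameters
    is assumed, not confirmed."""
    rng = np.random.default_rng(seed)
    cols = []
    for _ in range(K):
        # Orthonormal basis for a random d-dim subspace of R^m
        U = np.linalg.qr(rng.standard_normal((m, d)))[0]
        # n_per points lying on that subspace
        cols.append(U @ rng.standard_normal((d, n_per)))
    X = np.hstack(cols) + sigma * rng.standard_normal((m, K * n_per))
    mask = rng.random(X.shape) < missing_frac  # True = hidden entry
    X_obs = X.copy()
    X_obs[mask] = np.nan
    return X, X_obs, mask
```

Calling it with `missing_frac` set to 0.2, 0.5, or 0.8 reproduces the three missingness regimes described in the Dataset Splits row.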