Inductive Global and Local Manifold Approximation and Projection

Authors: Jungeum Kim, Xiao Wang

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have successfully applied both GLoMAP and iGLoMAP to the simulated and real-data settings, with experiments that are competitive against state-of-the-art methods.
Researcher Affiliation | Academia | Jungeum Kim (EMAIL), Booth School of Business, University of Chicago; Xiao Wang (EMAIL), Statistics Department, Purdue University
Pseudocode | Yes | Algorithm 1: Global distance construction; Algorithm 2: GLoMAP (transductive dimensional reduction); Algorithm 3: iGLoMAP (inductive dimensional reduction)
Open Source Code | Yes | All implementations are available at https://github.com/JungeumKim/iGLoMAP.
Open Datasets | Yes | The MNIST database contains 70,000 28×28 grayscale images with class (label) information, and is available at http://yann.lecun.com/exdb/mnist/.
Dataset Splits | Yes | We use 60,000 images for training (and, in a later section, the other 10,000 images for generalization).
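The split above follows the standard MNIST partition. A minimal index-bookkeeping sketch (loading the actual images, e.g. via torchvision or a raw-IDX parser, is omitted; the variable names are illustrative):

```python
# Standard MNIST split: the official 60,000-image training set,
# with the remaining 10,000 images held out for generalization.
N_TOTAL = 70_000
N_TRAIN = 60_000

train_idx = list(range(N_TRAIN))           # indices 0 .. 59,999
test_idx = list(range(N_TRAIN, N_TOTAL))   # indices 60,000 .. 69,999

assert len(train_idx) == 60_000
assert len(test_idx) == 10_000
assert not set(train_idx) & set(test_idx)  # the splits are disjoint
```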
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. It mentions computational-time comparisons but does not specify the hardware on which these were conducted.
Software Dependencies | No | The paper mentions the use of 'scikit-learn (Pedregosa et al., 2011)' and the 'Scikit-learn Python package (Buitinck et al., 2013)' but does not provide specific version numbers for these or for other key software components such as deep learning frameworks (e.g., PyTorch, TensorFlow) or Python itself.
Experiment Setup | Yes | We fix all λe to 1 (default), while τ is scheduled to decrease from 1 to 0.1 (default). All other learning hyperparameters are set to their defaults in the iGLoMAP package (learning rate decay = 0.98, Adam's initial learning rate = 0.01, initial particle learning rate = 1, number of neighbors K = 15, and mini-batch size = 100). GLoMAP was optimized for 300 epochs (500 epochs for MNIST), and iGLoMAP was trained for 150 epochs.
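The reported hyperparameters can be collected into a single configuration; a minimal sketch, where the dict keys and the multiplicative shape of the learning-rate decay are assumptions, not the iGLoMAP package's actual option names or schedule:

```python
# Defaults as reported in the paper; key names are illustrative only.
CONFIG = {
    "lambda_e": 1.0,       # fixed for all edges (default)
    "tau_start": 1.0,      # temperature scheduled to decrease 1.0 -> 0.1
    "tau_end": 0.1,
    "lr_decay": 0.98,      # learning-rate decay per epoch (shape assumed)
    "adam_lr": 0.01,       # Adam's initial learning rate
    "particle_lr": 1.0,    # initial particle learning rate
    "n_neighbors": 15,     # K
    "batch_size": 100,
    "glomap_epochs": 300,  # 500 for MNIST
    "iglomap_epochs": 150,
}

def lr_at_epoch(epoch: int, cfg: dict = CONFIG) -> float:
    """Adam learning rate after `epoch` multiplicative decay steps
    (assumes an exponential schedule; the paper states only the factor)."""
    return cfg["adam_lr"] * cfg["lr_decay"] ** epoch
```

For example, `lr_at_epoch(0)` returns the initial rate 0.01, and each subsequent epoch multiplies it by 0.98.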