AKRMap: Adaptive Kernel Regression for Trustworthy Visualization of Cross-Modal Embeddings

Authors: Yilin Ye, Junchao Huang, Xingchen Zeng, Jiazhi Xia, Wei Zeng

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Quantitative experiments demonstrate that AKRMap outperforms existing DR methods in generating more accurate and trustworthy visualizations.
Researcher Affiliation | Academia | 1 The Hong Kong University of Science and Technology (Guangzhou); 2 The Hong Kong University of Science and Technology; 3 The Chinese University of Hong Kong (Shenzhen); 4 Central South University. Correspondence to: Wei Zeng <EMAIL>.
Pseudocode | No | The paper describes the methodology using mathematical equations and textual descriptions, but does not include a distinct pseudocode or algorithm block.
Open Source Code | Yes | Code and demo are available at https://github.com/yilinye/AKRMap.
Open Datasets | Yes | To evaluate performance on the cross-modal embedding metric, we select the widely used large-scale T2I dataset, the Human Preference Dataset (HPD) (Wu et al., 2023a). The official HPD training set contains over 430,000 unique images generated by various T2I models, along with their corresponding prompts; the test set contains 3,700 images.
Dataset Splits | Yes | Specifically, we randomly split the dataset D into a training set Dtr and a validation set Dvl at a ratio of 9:1 in each epoch.
Hardware Specification | Yes | Our projection model is trained on one Nvidia L4 GPU with a batch size of 1000 for 20 epochs.
Software Dependencies | No | The visualization tool has been built into an easy-to-use Python package with minimal dependencies, requiring only PyTorch and Plotly, and can be seamlessly integrated into interactive computational notebooks.
Experiment Setup | Yes | Our projection model is trained on one Nvidia L4 GPU with a batch size of 1000 for 20 epochs. We use the Adam optimizer with a learning rate of 0.002. For the t-SNE and PCA implementations, we use the Python sklearn package, where the t-SNE method adopts Barnes-Hut t-SNE (Van Der Maaten, 2014).
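The per-epoch 9:1 train/validation split reported in the Dataset Splits row can be sketched as below. This is a minimal standard-library illustration, not the authors' code; the sample count and seed are hypothetical.

```python
import random

def split_epoch(n_samples, ratio=0.9, seed=None):
    """Draw a fresh random train/validation index split (redone each epoch).

    ratio=0.9 reproduces the paper's 9:1 split; n_samples is hypothetical.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    rng.shuffle(indices)
    cut = int(n_samples * ratio)
    return indices[:cut], indices[cut:]

# Example: a hypothetical pool of 10,000 embedding pairs
train_idx, val_idx = split_epoch(10_000, seed=0)
```

Because the split is resampled every epoch, each sample serves in validation roughly 10% of the time over training.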
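The Experiment Setup row (Adam, learning rate 0.002, batch size 1000, 20 epochs) corresponds to a standard PyTorch training loop. The sketch below is a hedged illustration only: the projection network, embedding dimension, and loss are made-up placeholders, since the actual AKRMap model and objective are defined in the authors' repository.

```python
import torch
from torch import nn

# Hypothetical stand-in for the projection network; the real AKRMap
# architecture and loss are in the authors' repository.
model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)  # lr from the paper

embeddings = torch.randn(1000, 512)  # one batch of 1000 (batch size from the paper)
for epoch in range(20):               # 20 epochs, per the reported setup
    optimizer.zero_grad()
    projected = model(embeddings)     # map embeddings to 2D coordinates
    loss = projected.pow(2).mean()    # placeholder loss, NOT the AKRMap objective
    loss.backward()
    optimizer.step()
```

For the baselines, note that sklearn's `TSNE` uses the Barnes-Hut approximation by default (`method="barnes_hut"`), matching the setup quoted above.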