Deep Disentangled Metric Learning

Authors: Jinhee Park, Jisoo Park, Dagyeong Na, Junseok Kwon

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We analyzed the performance of our method by integrating our disentangled module into existing DML methods to highlight the benefits of disentanglement. We compared the proposed method with other state-of-the-art DML approaches and evaluated it against regularization techniques. Finally, we conducted an ablation study on hyperparameters to interpret the impact of each loss term.
Researcher Affiliation | Academia | Jinhee Park (1), Jisoo Park (2), Dagyeong Na (2), Junseok Kwon (1,2); (1) School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea; (2) Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea.
Pseudocode | No | The paper describes the methodology using mathematical formulations and prose, but it does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format.
Open Source Code | No | The paper does not contain any explicit statement about the release of source code, nor does it provide a link to a code repository in the main text. Although 'supplementary materials' are mentioned, there is no indication that code is provided there.
Open Datasets | Yes | "For experiments, we followed the protocol outlined in (Oh Song et al. 2016). To evaluate the DML methods, we utilized several benchmark datasets for metric learning: Caltech-UCSD Birds (CUB) (Wah et al. 2011), CARS196 (CAR) (Krause et al. 2013), and Stanford Online Products (SOP) (Oh Song et al. 2016)."
Dataset Splits | No | The paper states: 'For experiments, we followed the protocol outlined in (Oh Song et al. 2016).' However, it does not explicitly provide the specific training/test/validation splits (e.g., percentages or counts) used for the datasets in its main content.
Hardware Specification | No | The paper does not explicitly mention any specific hardware details, such as GPU models, CPU types, or cloud computing instances, used to run the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | No | The paper states: 'Therefore, hyperparameters, such as the optimizer type, learning rate, weight decay parameter, embedding space dimension, and batch size, were kept consistent with those used in the baseline methods.' While it names the types of hyperparameters, it does not provide their specific values in the main text, instead deferring to the baseline methods without listing them explicitly.
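For context on the "Dataset Splits" finding above: the Oh Song et al. (2016) protocol the paper defers to is conventionally a class-disjoint split (roughly the first half of the classes for training and the remaining classes for testing, with no class overlap), rather than a per-image split. The sketch below is only an illustration of that convention, not the paper's code; the function name and the toy labels are invented for this example, and exact class counts for each benchmark should be verified against the protocol itself.

```python
def class_disjoint_split(labels, n_train_classes):
    """Split sample indices into class-disjoint train/test sets, following
    the common DML evaluation convention attributed to Oh Song et al. (2016):
    classes [0, n_train_classes) go to train, all remaining classes to test.
    """
    train_idx = [i for i, y in enumerate(labels) if y < n_train_classes]
    test_idx = [i for i, y in enumerate(labels) if y >= n_train_classes]
    return train_idx, test_idx


# Toy example: 4 classes, first 2 used for training, last 2 held out.
# (For the real benchmarks the convention is reported as roughly 100/100
# classes for CUB and 98/98 for CARS196 -- verify before relying on this.)
labels = [0, 0, 1, 2, 2, 3]
train, test = class_disjoint_split(labels, n_train_classes=2)
# train contains only samples of classes 0-1; test only classes 2-3.
```

Because train and test share no classes, this protocol evaluates how well learned embeddings generalize to unseen categories, which is why reporting the exact split matters for reproducibility.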