Asymmetric Cross-Modal Hashing Based on Formal Concept Analysis

Authors: Yinan Li, Jun Long, Zhan Yang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on the MIRFlickr, NUS-WIDE, and IAPR-TC12 datasets demonstrate the superior performance of ACHFCA over state-of-the-art hashing approaches.
Researcher Affiliation | Academia | Yinan Li, Jun Long, Zhan Yang* (Big Data Institute, Central South University, Changsha, China)
Pseudocode | Yes | Algorithm 1: The optimization of ACHFCA. Input: training instances X^(m), label matrix L, balance parameters γ, η, λ, ω, and maximum iteration number ξ. Output: binary codes B.
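The stated input/output contract of Algorithm 1 can be sketched as an alternating-optimization loop. This is a hypothetical skeleton only: the actual ACHFCA update rules (and the roles of γ, η, λ, ω) are specified in the paper, and a single placeholder projection of the label matrix L stands in for them here.

```python
import numpy as np

def achfca_optimize(L, xi, r=16, seed=0):
    """Sketch of Algorithm 1's loop shape: run xi alternating updates,
    then return binary codes B in {-1, +1}^(n x r).

    L  : (n, c) label matrix, as in the algorithm's input list.
    xi : maximum iteration number.
    r  : hash code length (an illustrative choice, not from the paper).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((L.shape[1], r))   # placeholder projection
    B = np.sign(rng.standard_normal((L.shape[0], r)))  # random init
    for _ in range(xi):
        # Placeholder update; the real method uses closed-form updates
        # weighted by the balance parameters gamma, eta, lambda, omega.
        B = np.sign(L @ W)
        B[B == 0] = 1          # keep codes strictly in {-1, +1}
    return B
```

The point is the control flow, not the objective: each pass refreshes the code matrix, and the final sign operation guarantees binary output.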
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | Yes | In this paper, we use the MIRFlickr, NUS-WIDE (Chua et al. 2009), and IAPR-TC12 (Escalante et al. 2010) datasets for evaluation.
Dataset Splits | Yes | For the MIRFlickr dataset, we randomly choose 18,015 image-text pairs as the training set and use the remainder as the test set. For the NUS-WIDE dataset, we select 10 typical tags and randomly choose 1,867 image-text pairs as the query set. For the IAPR-TC12 dataset, we randomly divide the image-text pairs into 18,000/2,000 training/test sets. The statistics of the three datasets are listed in Table 2.
Hardware Specification | Yes | All experiments are run on a server with an Intel Xeon Silver 4210 processor @ 2.20 GHz and 128 GB RAM.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | For the proposed ACHFCA, the parameters γ, η, λ, and ω are selected by grid search (from 10^-5 to 10^4, multiplying by 10 at each step). The best-performing parameter configurations are given in Table 3. In addition, ρ is set to 0.5.
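The grid described above (10 log-spaced candidates per parameter, 10^-5 through 10^4) can be sketched as an exhaustive sweep. `eval_map` is a hypothetical stand-in for the evaluation metric the authors optimize; the real scorer would train and evaluate the model for each configuration.

```python
import itertools

# Candidate values 1e-5, 1e-4, ..., 1e4: multiply by 10 at each step.
GRID = [10.0 ** e for e in range(-5, 5)]

def select_parameters(eval_map):
    """Exhaustive grid search over (gamma, eta, lambda, omega).

    eval_map is an assumed callable returning a score to maximize
    (e.g. mean average precision on a validation split).
    """
    best_score, best_cfg = float("-inf"), None
    for gamma, eta, lam, omega in itertools.product(GRID, repeat=4):
        score = eval_map(gamma, eta, lam, omega)
        if score > best_score:
            best_score, best_cfg = score, (gamma, eta, lam, omega)
    return best_cfg, best_score
```

With four parameters and ten candidates each, the sweep evaluates 10^4 = 10,000 configurations, which is why papers typically report only the best-performing setting per dataset.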