Distributed Cascaded Manifold Hashing Network for Compact Image Set Representation

Authors: Xiaxin Wang, Haoyu Cai, Xiaobo Shen, Xia Wu

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on three benchmark image set datasets demonstrate that the proposed DCMHN achieves competitive accuracies in distributed settings, and outperforms state-of-the-arts in terms of computation and storage efficiency."
Researcher Affiliation | Academia | Xiaxin Wang¹, Haoyu Cai¹, Xiaobo Shen¹, Xia Wu²; ¹Nanjing University of Science and Technology, ²Beijing Institute of Technology
Pseudocode | Yes | "Algorithm 1: Distributed Cascade Manifold Hashing Network (DCMHN)"
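The pseudocode itself is not reproduced in this report, but the BiMap layers named in the experiment setup are, in SPDNet-style manifold networks, bilinear projections W^T X W with column-orthonormal W. The following is a minimal sketch under that assumption; the names `bimap` and `semi_orthogonal` are illustrative, and the semi-orthogonality constraint is a common convention rather than something this excerpt confirms.

```python
import numpy as np

def bimap(X, W):
    """Bilinear mapping: project an SPD matrix X to the smaller SPD matrix W^T X W."""
    return W.T @ X @ W

def semi_orthogonal(d_in, d_out, seed=0):
    # Random column-orthonormal weight (W^T W = I), the usual BiMap parameterization.
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((d_in, d_out)))
    return Q

# FPHA-like example: a 63x63 input passed through the two quoted BiMap dims [20, 10].
X = np.cov(np.random.default_rng(1).standard_normal((63, 200)))
W1 = semi_orthogonal(63, 20)
W2 = semi_orthogonal(20, 10, seed=2)
Y = bimap(bimap(X, W1), W2)
print(Y.shape)  # (10, 10)
```

The cascade of two such layers shrinks the 63×63 representation to 10×10 while keeping it symmetric, which matches the layer-size lists quoted in the setup row.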
Open Source Code | No | The paper does not contain any explicit statement or link regarding the release of the source code for the methodology described.
Open Datasets | Yes | "Three benchmark image set datasets, i.e., FPHA [Garcia-Hernando et al., 2018], AFEW [Wang et al., 2012b], BBT [Li et al., 2015] are used for experiment."
Dataset Splits | Yes | Table 1 statistics (the #Dim column gives the size of each set's d×d representation):

    Dataset  Type      #Samples  #Training  #Testing  #Dim
    FPHA     sequence  1150      600        550       63×63
    AFEW     video     2118      1747       371       400×400
    BBT      video     4667      3268       1399      512×512

"The training set of each image set benchmark is equally distributed across all the nodes in the network to construct distributed data."
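The equal distribution of a training set across network nodes can be sketched as a shuffle-and-split. This is a hedged illustration: `distribute_training_set` is a hypothetical helper, and the excerpt does not say whether the paper shuffles before partitioning.

```python
import numpy as np

def distribute_training_set(n_train, num_nodes, seed=0):
    """Shuffle training indices and split them into num_nodes (nearly) equal shards."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_train)
    return np.array_split(idx, num_nodes)

# FPHA-like example: 600 training sets over 3 nodes -> 200 per node.
shards = distribute_training_set(600, 3)
print([len(s) for s in shards])  # [200, 200, 200]
```

With the quoted node counts (3 nodes for FPHA and AFEW, 4 for BBT), FPHA's 600 training sets split exactly evenly; `np.array_split` also handles the cases where the count is not divisible by the number of nodes.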
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or detailed computer specifications used for running the experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | "To ensure positive definiteness, we add a small regularization matrix (Tr(X)/α)·I, where I denotes the identity matrix, and parameter α is empirically set to 10³. The dimensions of the two BiMap layers are set to [20, 10], [70, 35], [80, 40] for FPHA, AFEW, and BBT respectively. The Acc and mAP of the proposed method with respect to different code lengths varying from 16 to 1024 are illustrated in Figure 3. As can be seen, the performance of the proposed method improves as code length increases, and the best performance is generally achieved when code length is set to 256. The Acc and mAP of the proposed method on AFEW with respect to µ varying over [10⁻¹⁰, 10⁻⁴] are shown in Figure 5. It can be observed that as µ decreases, the performance remains stable; good performance is achieved when µ is set below 10⁻⁶. The network includes 3 nodes for FPHA and AFEW, and 4 nodes for BBT."
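The positive-definiteness trick quoted above (adding a small multiple of the identity scaled by the trace) can be sketched as follows. This assumes the garbled formula reads (Tr(X)/α)·I with α = 10³; `regularize_spd` is an illustrative name, not from the paper.

```python
import numpy as np

def regularize_spd(X, alpha=1e3):
    """Shift a PSD matrix to strict positive definiteness: X + (Tr(X)/alpha) * I."""
    return X + (np.trace(X) / alpha) * np.eye(X.shape[0])

# A rank-deficient covariance (more dims than samples) becomes invertible after the shift.
rng = np.random.default_rng(0)
samples = rng.standard_normal((5, 20))   # 5 samples in 20 dimensions
C = samples.T @ samples / 5              # 20x20 covariance, rank <= 5
C_reg = regularize_spd(C)
print(np.linalg.eigvalsh(C_reg).min() > 0)  # True
```

Scaling the shift by Tr(X) keeps the perturbation proportional to the matrix's overall magnitude, so the same α works across datasets whose covariance scales differ.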