Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency

Authors: Yuhong Chen, Ailin Song, Huifeng Yin, Shuai Zhong, Fuhai Chen, Qi Xu, Shiping Wang, Mingkun Xu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on six benchmark datasets show MVIL's effectiveness over state-of-the-art methods. ... We conducted extensive experimental evaluations to demonstrate the superior performance of our method for node classification."
Researcher Affiliation | Academia | (1) College of Computer and Data Science, Fuzhou University, Fuzhou, China; (2) Guangdong Institute of Intelligence Science and Technology, Hengqin, Zhuhai, China; (3) University of Electronic Science and Technology of China, Chengdu, China; (4) Center for Brain Inspired Computing Research, Department of Precision Instrument, Tsinghua University, Beijing, China; (5) School of Computer Science and Technology, Dalian University of Technology, Dalian, China
Pseudocode | Yes | Algorithm 1: MVIL
Open Source Code | No | The paper does not contain any explicit statement about code availability, nor does it provide a link to a code repository.
Open Datasets | Yes | "We adopt six multi-view graph datasets widely used in different domains to evaluate the performance of MVIL compared to state-of-the-art baselines. Table 2 illustrates the details of all six datasets." Datasets: 100leaves, Animals, Flower17, NGs, Noisy MNIST, Yale B Extended
Dataset Splits | Yes | "Experimental rounds of the proposed method are 600 per view under 10% randomly labeled data and the average value is taken by repeating the run 3 times."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | "Experimental rounds of the proposed method are 600 per view under 10% randomly labeled data and the average value is taken by repeating the run 3 times."
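The split-and-repeat protocol quoted above (10% randomly labeled data, with the reported value averaged over 3 repeated runs) can be sketched as follows. This is a minimal sketch, not the authors' released code (none is available): the helper names `random_label_split` and `evaluate_protocol`, and the `run_once` callback interface, are all hypothetical.

```python
import numpy as np

def random_label_split(n_nodes, label_ratio=0.1, seed=0):
    """Mark a random `label_ratio` fraction of nodes as labeled.

    Hypothetical helper: the paper only states that experiments use
    '10% randomly labeled data', not how the split is drawn.
    """
    rng = np.random.default_rng(seed)
    labeled = rng.choice(n_nodes, size=int(n_nodes * label_ratio), replace=False)
    mask = np.zeros(n_nodes, dtype=bool)
    mask[labeled] = True
    return mask

def evaluate_protocol(run_once, n_nodes, repeats=3):
    """Average a metric over `repeats` runs, each on a fresh 10% split.

    `run_once(train_mask, seed)` is an assumed interface: it should train
    the model on the labeled nodes and return a scalar score (e.g. node
    classification accuracy).
    """
    scores = [run_once(random_label_split(n_nodes, seed=s), s)
              for s in range(repeats)]
    return float(np.mean(scores))
```

Under this reading, a single reported number is the mean of three independent trials, each with its own random 10% labeled mask; per-seed variance is not reported in the table above.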