Open-Set Cross-Network Node Classification via Unknown-Excluded Adversarial Graph Domain Alignment

Authors: Xiao Shen, Zhihao Chen, Shirui Pan, Shuang Zhou, Laurence T. Yang, Xi Zhou

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on real-world datasets demonstrate that the proposed UAGA significantly outperforms state-of-the-art methods on O-CNNC.
Researcher Affiliation Academia Xiao Shen¹, Zhihao Chen¹, Shirui Pan², Shuang Zhou³, Laurence T. Yang⁴˒⁵, and Xi Zhou¹* (¹Hainan University, ²Griffith University, ³The Hong Kong Polytechnic University, ⁴Zhengzhou University, ⁵St. Francis Xavier University)
Pseudocode Yes Algorithm 1 UAGA. Input: fully labeled source network 𝒢ˢ = (𝑨ˢ, 𝑿ˢ, 𝒀ˢ) and unlabeled target network 𝒢ᵗ = (𝑨ᵗ, 𝑿ᵗ). 1: Initialize parameters θ_G, θ_C, θ_D. 2: while not max epoch of separation stage do
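The excerpted pseudocode shows UAGA's two-stage schedule: a separation stage followed by an adaptation stage, each run for a fixed number of epochs. A minimal, framework-free sketch of that control flow is below; the function and parameter names (`train_uaga`, `sep_step`, `adapt_step`) are assumptions for illustration, not the authors' API, and the per-epoch step callables stand in for the actual loss updates over θ_G, θ_C, θ_D.

```python
# Hypothetical sketch of UAGA's two-stage training schedule.
# Only the stage/epoch structure from Algorithm 1 is modeled here;
# the real per-epoch work (GNN forward pass, adversarial updates)
# would live inside the step callables.
def train_uaga(sep_epochs=30, adapt_epochs=200, sep_step=None, adapt_step=None):
    """Run the rough-separation stage, then the adversarial adaptation stage."""
    history = []
    for epoch in range(sep_epochs):  # stage 1: rough separation of unknown-class candidates
        result = sep_step(epoch) if sep_step else None
        history.append(("separation", epoch, result))
    for epoch in range(adapt_epochs):  # stage 2: unknown-excluded adversarial domain alignment
        result = adapt_step(epoch) if adapt_step else None
        history.append(("adaptation", epoch, result))
    return history
```

With the paper's reported settings (30 separation epochs, 200 adaptation epochs), `train_uaga()` would record 230 epoch entries in order.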
Open Source Code Yes Code https://github.com/3480430977/UAGA
Open Datasets Yes Previous benchmark datasets for closed-set CNNC (Shen et al. 2021) only contain 5 node classes, which limits the possible splits of known and unknown classes (i.e., varying openness) in the open-set setting. To remedy this, we construct new benchmark datasets with more node classes for O-CNNC, i.e., Citation-v1 (C), DBLP-v4 (D) and ACM-v8 (A). They are real-world paper citation networks extracted from ArnetMiner, with papers published in different periods, i.e., between 1997 and 2003, between 2004 and 2011, and between 2012 and 2015, respectively.
Dataset Splits Yes Let 𝒢ˢ = (𝒱ˢ, ℰˢ, 𝑨ˢ, 𝑿ˢ, 𝒀ˢ) denote a fully labeled source network ... Let 𝒢ᵗ = (𝒱ᵗ, ℰᵗ, 𝑨ᵗ, 𝑿ᵗ) denote an unlabeled target network ... For each O-CNNC task, we chose the first 𝐾 classes as known classes, while all the remaining 9 − 𝐾 classes were re-labeled as the (𝐾+1)-th unknown class, following the common setting in OSDA (Liu et al. 2019).
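The open-set relabeling described above (keep the first 𝐾 classes, collapse everything else into one unknown class indexed 𝐾) can be sketched in a few lines. The function name `relabel_open_set` is an assumption for illustration; with 0-based labels, the unknown class gets index 𝐾, i.e., it is the (𝐾+1)-th class.

```python
import numpy as np

def relabel_open_set(labels, K):
    """Open-set split following the OSDA convention (assumed helper):
    labels 0..K-1 stay as known classes; every other label is mapped
    to the single unknown class with 0-based index K."""
    labels = np.asarray(labels)
    return np.where(labels < K, labels, K)
```

For example, with 𝐾 = 3, original labels [0, 1, 2, 7, 8] become [0, 1, 2, 3, 3].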
Hardware Specification Yes All experiments were conducted on a single Tesla A40 GPU with 48GB memory.
Software Dependencies Yes The proposed UAGA was implemented with PyTorch 1.7.1 (Paszke et al. 2019) and Deep Graph Library 0.7.2 (Wang et al. 2019).
Experiment Setup Yes UAGA was trained by the Adam optimizer with a learning rate of 1e-3. The batch size 𝔹 was set to 2048. The numbers of training epochs of the separation stage and adaptation stage were set to 30 and 200, respectively. The number of layers of the GNN encoder 𝑓𝐺 was set to 1. The numbers of attention heads in the GNN encoder 𝑓𝐺 and node classifier 𝑓𝐶 were set to 8 and 2, respectively. The number of embedding dimensions per head 𝕕 in the GNN encoder 𝑓𝐺 was set to 32. The weight of the target node classification loss ℒᵗ (i.e., 𝛽) was set to 0.1. The unknown threshold 𝜇 in the rough separation stage was set to 0.5, following OSBP (Saito et al. 2018). The number of top 𝑅 target nodes forming the pseudo-unknown set 𝒰 was selected from {2000, 3000, 4000}.
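The reported hyperparameters can be collected into a single configuration for reference. The values below are taken directly from the setup above; the dictionary layout and the `UAGA_CONFIG`/`embedding_dim` names are assumptions for illustration, not part of the released code.

```python
# Hyperparameters as reported in the paper (values verbatim; structure assumed).
UAGA_CONFIG = {
    "optimizer": "Adam",
    "lr": 1e-3,
    "batch_size": 2048,           # B
    "epochs_separation": 30,
    "epochs_adaptation": 200,
    "gnn_layers": 1,              # layers in encoder f_G
    "attn_heads_encoder": 8,      # heads in f_G
    "attn_heads_classifier": 2,   # heads in f_C
    "dim_per_head": 32,           # d: embedding dims per head
    "beta": 0.1,                  # weight of target classification loss L^t
    "unknown_threshold": 0.5,     # mu, following OSBP
    "top_R_candidates": (2000, 3000, 4000),  # search space for R
}

def embedding_dim(cfg):
    """Total encoder output width implied by a multi-head setup:
    heads * dims-per-head (assuming head outputs are concatenated)."""
    return cfg["attn_heads_encoder"] * cfg["dim_per_head"]
```

Under the concatenation assumption, the encoder would produce 8 × 32 = 256-dimensional node embeddings.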