An Out-Of-Distribution Membership Inference Attack Approach for Cross-Domain Graph Attacks
Authors: Jinyan Wang, Liu Yang, Yuecen Wei, Jiaxuan Si, Chenhao Guo, Qingyun Sun, Xianxian Li, Xingcheng Fu
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that GOOD-MIA achieves superior attack performance on datasets spanning multiple domains. From Section 5 (Experiments): "In this section, we investigate the effectiveness of the proposed attack model in the face of three cross-domain settings with practical significance, aiming to address the following research questions." |
| Researcher Affiliation | Academia | 1. Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, China; 2. Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, China; 3. School of Software, Beihang University, Beijing, China; 4. SKLCCSE, School of Computer Science and Engineering, Beihang University, China |
| Pseudocode | Yes | Algorithm 1 GOOD-MIA: Out-of-Distribution Membership Inference Attack Approach for Cross-domain Graphs Attack.<br>Input: target graph G_target, shadow graph G_shadow, number of training environments M. Parameters: ω, ψ. Output: membership prediction.<br>1: Construct M shadow training environments by Eq. (6);<br>2: Initialize parameters ω, ψ;<br>3: while shadow-model training epoch < N^S_epoch do<br>4: for e = 1, …, M do<br>5: Get the representation h for nodes of G_e w.r.t. Eq. (7);<br>6: end for<br>7: Train ω by minimizing Eq. (10);<br>8: end while<br>9: return posterior probabilities from the shadow model;<br>10: while attack-model training epoch < N^Attack_epoch do<br>11: Train ψ by minimizing Eq. (13);<br>12: end while |
| Open Source Code | No | The paper does not provide any explicit statement about releasing code or a link to a code repository. Statements like "We release our code..." or a direct URL are absent. |
| Open Datasets | Yes | Datasets. We adopt five node property prediction datasets of different sizes and properties, including Cora, Citeseer, Pubmed, Twitch and Facebook-100. For Cora, Citeseer and Pubmed [Sen et al., 2008], ... The Twitch and Facebook-100 [Rozemberczki and Sarkar, 2021; Traud et al., 2012] represent canonical real-world social networks. |
| Dataset Splits | No | For Twitch, the paper considers subgraph-level data splits: nodes in subgraph DE are used as target-model datasets, while nodes in ENGB, ES, FR, PTBR, RU and TW are used as shadow-model datasets to set up cross-domain attacks in different domain environments. For Facebook-100, Johns Hopkins, Amherst and Cornell5 serve as target datasets, and Penn and Reed as shadow datasets. The paper also states that each augmented graph G_S is divided into two disjoint subgraphs, G^Train_S and G^Test_S, but no specific percentages or sample counts for these splits are provided in the main text. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | The target model was evaluated with 256 neurons in its hidden layer, while the number of neurons in the shadow model was varied from 32 to 256 (Fig. 4). The trade-off parameter α is analyzed in Figure 3. The overall training procedure in Algorithm 1 also references N^S_epoch and N^Attack_epoch as the numbers of training epochs, without stating their values. |
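Since no code is released, the two-phase structure of Algorithm 1 can only be approximated. Below is a minimal, hypothetical sketch of that structure: a shadow model (parameters ω) trained across M environments, whose posteriors then train an attack model (parameters ψ). All names, shapes, model choices, and loss functions are illustrative stand-ins; Eqs. (6), (7), (10) and (13) from the paper are not reproduced, and plain linear layers replace the paper's GNN.

```python
# Hypothetical sketch of Algorithm 1 (GOOD-MIA); not the authors' implementation.
import torch
import torch.nn as nn

torch.manual_seed(0)

M = 3                                       # number of shadow training environments
n_nodes, n_feats, n_classes = 20, 8, 4

# Stand-in for Eq. (6): M shadow environments as (features, labels, membership mask)
envs = [(torch.randn(n_nodes, n_feats),
         torch.randint(n_classes, (n_nodes,)),
         torch.randint(2, (n_nodes,)).float()) for _ in range(M)]

shadow = nn.Linear(n_feats, n_classes)      # parameters ω (GNN encoder omitted)
attack = nn.Linear(n_classes, 1)            # parameters ψ
opt_w = torch.optim.Adam(shadow.parameters(), lr=0.05)
opt_psi = torch.optim.Adam(attack.parameters(), lr=0.05)

# Phase 1: train ω across the M environments (stand-in for Eq. (10))
for _ in range(50):                         # N^S_epoch
    loss = sum(nn.functional.cross_entropy(shadow(x), y) for x, y, _ in envs)
    opt_w.zero_grad(); loss.backward(); opt_w.step()

# Shadow-model posterior probabilities become the attack model's inputs
posteriors = [torch.softmax(shadow(x), dim=-1).detach() for x, _, _ in envs]

# Phase 2: train ψ for membership prediction (stand-in for Eq. (13))
for _ in range(50):                         # N^Attack_epoch
    loss = sum(nn.functional.binary_cross_entropy_with_logits(
                   attack(p).squeeze(-1), m)
               for p, (_, _, m) in zip(posteriors, envs))
    opt_psi.zero_grad(); loss.backward(); opt_psi.step()

# Per-node binary membership prediction for one environment
pred = (torch.sigmoid(attack(posteriors[0])).squeeze(-1) > 0.5).long()
```

The sketch only mirrors the control flow of the transcribed pseudocode (environment loop, shadow training, posterior hand-off, attack training); the paper's OOD-specific objectives would replace the placeholder losses.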