GPromptShield: Elevating Resilience in Graph Prompt Tuning Against Adversarial Attacks
Authors: Shuhan Song, Ping Li, Ming Dun, Maolei Huang, Huawei Cao, Xiaochun Ye
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 EXPERIMENTS; 5.1 EXPERIMENT SETUP; 5.2 PRE-TRAINING STRATEGIES AND PROMPT TUNING; 5.3 PROMPT TUNING UNDER NON-ADAPTIVE ATTACKS; 5.4 PROMPT TUNING UNDER ADAPTIVE ATTACKS; 5.5 ABLATION STUDY. The paper includes multiple tables (e.g., Table 1, 2, 3) reporting performance metrics and comparative results across various datasets and attack scenarios, which are characteristic of experimental research. |
| Researcher Affiliation | Collaboration | 1State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 2University of Chinese Academy of Sciences, Beijing, China 3Zhongguancun Laboratory, Beijing, China 4Fliggy, Alibaba Group. The affiliations include both academic institutions (e.g., Chinese Academy of Sciences, University of Chinese Academy of Sciences) and an industry entity (Alibaba Group), indicating a collaborative effort. |
| Pseudocode | No | The paper describes methods like 'Direct Handling' and 'Indirect Amplification' and shows a workflow in Figure 4, but it does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or structured pseudo-code within the text. |
| Open Source Code | Yes | 7 REPRODUCIBILITY STATEMENT: The code for this paper is provided at https://github.com/GTLSysGraph/GPromptShield. |
| Open Datasets | Yes | Thus, we use several datasets that are the focus of most attacks, including Cora-ML (McCallum et al., 2000), citation graph (Cora, Citeseer) (Sen et al., 2008). ... For node classification and link prediction tasks, we select homophilic datasets (Cora, Citeseer, Pubmed) (Sen et al., 2008; Namata et al., 2012), heterophilic datasets (Wisconsin) (Pei et al., 2020) and explore the prompts on large graph (ogbn-arxiv) (Hu et al., 2020). ... For the graph classification task, ... we chose the molecular dataset MUTAG (Kriege & Mutzel, 2012), the social network dataset COLLAB (Yanardag & Vishwanathan, 2015a), the protein dataset PROTEINS (Wang et al., 2022), and the social network dataset IMDB-BINARY (Yanardag & Vishwanathan, 2015b). |
| Dataset Splits | Yes | Under the few-shot setting, there are only k labeled cases for each class in C, i.e., $\mathcal{T} = \{(\tilde{S}_{x_1}, y_1), \ldots, (\tilde{S}_{x_{k\|C\|}}, y_{k\|C\|})\}$. ... We conduct experiments in both 5-shot and 10-shot scenarios. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments. It only mentions the experimental setup in terms of datasets and attack types. |
| Software Dependencies | No | The paper mentions 'ProG is a library built upon PyTorch' in Appendix A.5 but does not specify version numbers for PyTorch or any other software dependencies. It provides links to repositories for baseline implementations and attack code, but these do not explicitly state the versioned software dependencies used for their own methodology. |
| Experiment Setup | No | To optimize the learnable prompts, we propose three robust auxiliary constraints. ... $\min_{\theta, p_d, p_s, p_o} \sum_i \mathcal{L}(\tilde{G}_i, y_i) + \alpha \mathcal{L}_s + \beta \mathcal{L}_{kl} + \gamma \mathcal{L}_o$. The paper states the form of the optimization objective and names the hyperparameters (α, β, γ) and thresholds (τ_degree, τ_sim, τ_tune), but it does not give their numerical values in the main text. It also mentions 5-shot and 10-shot scenarios but omits other concrete hyperparameters such as learning rate, batch size, or number of epochs. |
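The objective quoted in the Experiment Setup row is a task loss summed over prompted graphs plus three auxiliary constraints weighted by α, β, γ. The following minimal sketch illustrates that weighted-sum structure; the function name and the scalar placeholder losses are illustrative assumptions, and the paper does not report concrete values for α, β, γ.

```python
# Hypothetical sketch of the paper's combined objective:
#   sum_i L(G_i, y_i) + alpha*L_s + beta*L_kl + gamma*L_o
# All names and values here are illustrative placeholders,
# not the authors' implementation.
def total_objective(task_losses, l_s, l_kl, l_o, alpha, beta, gamma):
    """Combine per-graph task losses with three weighted auxiliary terms."""
    return sum(task_losses) + alpha * l_s + beta * l_kl + gamma * l_o

# Dummy scalar losses for illustration.
total = total_objective([0.6, 0.4], l_s=0.2, l_kl=0.1, l_o=0.05,
                        alpha=1.0, beta=1.0, gamma=1.0)
print(round(total, 2))  # 1.35
```

In a real training loop each term would be a differentiable tensor and the weighted sum would be backpropagated jointly through the prompt parameters.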