Multi-View Empowered Structural Graph Wordification for Language Models
Authors: Zipeng Liu, Likang Wu, Ming He, Zhong Guan, Hongke Zhao, Nan Feng
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental evaluations on standard graph tasks demonstrate competitive performance against other state-of-the-art (SOTA) approaches. Additionally, our framework ensures certain visual interpretability, efficiency, and robustness, marking a promising endeavor toward token-level alignment between LLMs and GNNs. |
| Researcher Affiliation | Collaboration | (1) College of Management and Economics, Tianjin University; (2) Laboratory of Computation and Analytics of Complex Management Systems (CACMS), Tianjin University; (3) AI Lab, Lenovo Research |
| Pseudocode | No | The paper describes the methodology using prose and equations, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Code: https://github.com/Timothy914/Dr.E |
| Open Datasets | Yes | To evaluate the efficacy of our framework, Dr.E is tested on three benchmark datasets: Cora (McCallum et al. 2000), PubMed (Sen et al. 2008), and OGBN-Arxiv (Hu et al. 2020). |
| Dataset Splits | Yes | We adhere to the dataset splits commonly employed by other methods, such as those detailed in (He et al. 2023). |
| Hardware Specification | Yes | Our experiments are conducted using 2 NVIDIA A800-SXM4-80GB GPUs. |
| Software Dependencies | No | The paper mentions using "Llama2-7B" and "LoRA PEFT adjustments" but does not provide specific version numbers for these or other software components. |
| Experiment Setup | Yes | We implement LoRA PEFT adjustments for Llama2-7B and establish two distinct learning rates for the GNN encoder and LLM decoder, set at 1e-3 and 1e-4, respectively, with a weight decay of 5e-4. The hidden dimension for the SAGE convolution is 4096, matching the token embedding of Llama. |
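The reported optimizer setup (separate learning rates for the GNN encoder and the LLM's LoRA parameters, shared weight decay) can be sketched in PyTorch as below. This is a minimal illustration, not the authors' code: the `nn.Linear` modules are hypothetical stand-ins for the actual GraphSAGE encoder and LoRA adapter weights, and the 1433-dimensional input is an assumption based on Cora's standard feature size.

```python
import torch
from torch import nn

# Hypothetical stand-ins for the paper's two trainable components:
# a GraphSAGE-style encoder projecting node features to the LLM token
# dimension (4096, matching Llama2-7B's embedding size), and a placeholder
# for the LoRA adapter weights of the otherwise frozen LLM decoder.
gnn_encoder = nn.Linear(1433, 4096)   # e.g. Cora's 1433-dim features -> 4096
lora_params = nn.Linear(4096, 4096)   # placeholder for LoRA adapter weights

# Two parameter groups with distinct learning rates, as reported:
# 1e-3 for the GNN encoder, 1e-4 for the LLM (LoRA) side,
# both inheriting the shared weight decay of 5e-4.
optimizer = torch.optim.AdamW(
    [
        {"params": gnn_encoder.parameters(), "lr": 1e-3},
        {"params": lora_params.parameters(), "lr": 1e-4},
    ],
    weight_decay=5e-4,
)
```

Per-group settings override the optimizer-wide defaults, so each module trains at its own rate while sharing the regularization strength.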