ID-GMLM: Intelligent Decision-Making with Integrated Graph Models and Large Language Models
Authors: Zhenhua Meng, Fanshen Meng, Rongheng Lin, Budan Wu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets demonstrate that ID-GMLM achieves significant performance improvements... In this section, we conduct experiments to address the following research questions: RQ1: To what extent does ID-GMLM improve the decision effectiveness over other intelligent decision models? RQ2: How does each component in ID-GMLM contribute to enhancing decision-making accuracy? RQ3: Does incorporating LLMs into ID-GMLM improve the interpretability of decision-making? RQ4: How do key parameters influence the performance of ID-GMLM? |
| Researcher Affiliation | Academia | Zhenhua Meng, Fanshen Meng, Rongheng Lin*, Budan Wu — State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. All authors are affiliated with Beijing University of Posts and Telecommunications, an academic institution. |
| Pseudocode | No | The paper describes the methodology using mathematical equations and detailed descriptions of components like APN, CRN, LLMs-Based Criterion Weights Generation, Parameter Tuning Network, and Attention Network, but it does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | Yes | We collect six publicly available datasets from the UCI (http://archive.ics.uci.edu/ml/) and WEKA (http://www.cs.waikato.ac.nz/ml/weka/datasets.html) repositories, which are summarized in Table 1. These datasets are selected because they meet two essential conditions: first, the outputs are measured on an ordered categorical scale; second, all criteria have a monotonic influence on the ranking of alternatives. |
| Dataset Splits | Yes | We split the data into training, validation, and test sets in a 7:2:1 ratio. |
| Hardware Specification | Yes | Each model is run 100 times on an Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz with 96GB memory, and we report the average results. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'gpt-3.5-turbo and text-embedding-ada-002 models' but does not specify version numbers for general software libraries or frameworks like Python, PyTorch, or TensorFlow, which are essential for full reproducibility. |
| Experiment Setup | Yes | Our model consists of a 2-layer residual GNN with a mean aggregator and hidden layers sized 2-5 times the input layer, alongside a 2-layer fully connected GCN with hidden layers three times the input size. We perform hyperparameter tuning with learning rates and regularization weights from {1e-1, 1e-2, 1e-3, 1e-4, 1e-5}, weight decay values {1e-3, 1e-4}, and k-nearest neighbors ranging from 2 to 10. Training uses the Adam optimizer for up to 500 epochs with early stopping, and initial DM preferences are based on dataset descriptions. |
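The data split and hyperparameter search space reported above can be sketched as follows. This is a minimal illustration, not the authors' code: the function and variable names are hypothetical, and the 7:2:1 split plus the grid values are taken directly from the quoted setup.

```python
# Hedged sketch (not the authors' implementation): the 7:2:1 split and the
# hyperparameter grid described in the paper's experiment setup.
import itertools
import random

def split_indices(n, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle indices and split into train/val/test by the 7:2:1 ratio."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Grid from the paper: learning rates / regularization weights,
# weight decay values, and k-nearest-neighbor counts from 2 to 10.
grid = list(itertools.product(
    [1e-1, 1e-2, 1e-3, 1e-4, 1e-5],  # learning rate / regularization weight
    [1e-3, 1e-4],                    # weight decay
    range(2, 11),                    # k-nearest neighbors
))

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))  # 700 200 100
print(len(grid))                        # 5 * 2 * 9 = 90 configurations
```

Each of the 90 configurations would then be trained with Adam for up to 500 epochs with early stopping on the validation set, as the setup describes.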