Mixture of Knowledge Minigraph Agents for Literature Review Generation

Authors: Zhi Zhang, Yan Liu, Sheng-hua Zhong, Gong Chen, Yu Yang, Jiannong Cao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate CKMAs on three benchmark datasets. Experimental results show the effectiveness of the proposed method, further revealing promising applications of LLMs in scientific research.
Researcher Affiliation | Academia | Zhi Zhang (1), Yan Liu (1,*), Sheng-hua Zhong (2), Gong Chen (1), Yu Yang (3), Jiannong Cao (1). (1) The Hong Kong Polytechnic University, Department of Computing, Hong Kong, 999077, China; (2) Shenzhen University, College of Computer Science and Software Engineering, Shenzhen, 518052, Guangdong, China; (3) The Education University of Hong Kong, Centre for Learning, Teaching, and Technology, Hong Kong, 999077, China.
Pseudocode | No | The paper describes methods through text and diagrams, and provides no formal pseudocode for the proposed algorithms.
Open Source Code | Yes | Project page: https://minigraph-agents.github.io/
Open Datasets | Yes | We evaluate our approach on three public MSDS datasets: Multi-Xscience (Lu, Dong, and Charlin 2020), TAD (Chen et al. 2022), and TAS2 (Chen et al. 2022).
Dataset Splits | No | The paper describes the datasets used and their format, but does not explicitly state how they were split into training, validation, or test sets, which limits reproducibility of the experiments.
Hardware Specification | No | The paper mentions using GPT-3.5-turbo as the backbone model, but does not specify the hardware (GPU, CPU, etc.) used to run the experiments or to interact with the model.
Software Dependencies | Yes | We use GPT-3.5-turbo (0301) as the backbone model with a temperature set to 0.0 for reproducibility.
Experiment Setup | Yes | We set the chunk size k to 3 and the number of experts E to 3. We set the volume constraint m to 32. We use GPT-3.5-turbo (0301) as the backbone model with a temperature set to 0.0 for reproducibility.
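The experiment-setup values reported above can be collected into a single configuration, which is how a reproduction attempt would typically pin them down. This is a minimal sketch: the dictionary keys and the `summarize` helper are assumptions for illustration, not code from the paper; only the values (k=3, E=3, m=32, GPT-3.5-turbo (0301), temperature 0.0) come from the reported setup.

```python
# Hypothetical configuration sketch of the reported CKMA experiment settings.
# Key names are illustrative; the values are taken from the paper's setup.
EXPERIMENT_CONFIG = {
    "backbone_model": "gpt-3.5-turbo-0301",  # GPT-3.5-turbo (0301)
    "temperature": 0.0,                      # fixed at 0.0 for reproducibility
    "chunk_size_k": 3,                       # chunk size k
    "num_experts_E": 3,                      # number of experts E
    "volume_constraint_m": 32,               # volume constraint m
}

def summarize(config: dict) -> str:
    """Render the configuration as a single reproducibility-log line."""
    return ", ".join(f"{key}={value}" for key, value in config.items())
```

Logging such a line alongside each run makes it easy to verify that a reproduction used the same deterministic decoding settings as the original experiments.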