CognTKE: A Cognitive Temporal Knowledge Extrapolation Framework
Authors: Wei Chen, Yuting Wu, Shuhan Wu, Zhiyu Zhang, Mengqi Liao, Youfang Lin, Huaiyu Wan
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on four benchmark datasets demonstrate that CognTKE achieves significant improvement in accuracy compared to the state-of-the-art baselines and delivers excellent zero-shot reasoning ability. ... The overall experimental results of CognTKE and baselines on four benchmark datasets are displayed in Table 1. ... To further better understand each component of CognTKE that contributes to the prediction results, we conduct ablation studies on ICE14, ICE18, and ICE05-15 datasets. |
| Researcher Affiliation | Academia | 1School of Computer Science & Technology, Beijing Jiaotong University, Beijing, China; 2Beijing Key Laboratory of Traffic Data Mining and Embodied Intelligence, Beijing, China; 3School of Software, Beijing Jiaotong University, Beijing, China |
| Pseudocode | No | The paper describes methods using mathematical formulas and text, but no distinct section or figure explicitly labeled as "Pseudocode" or "Algorithm" is present. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | To evaluate CognTKE on entity prediction task, we adopt four benchmark datasets that are widely used for TKG extrapolation, including ICE14, ICE18, ICE05-15 (Jin et al. 2020), and WIKI (Li et al. 2021b). |
| Dataset Splits | Yes | Following the preprocessing strategy (Li et al. 2021b; Han et al. 2021; Li et al. 2022), we split all datasets into training, validation, and test sets with the proportions of 80%/10%/10% based on chronological order. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions various components and models (e.g., FFN, GRU, GAT, MLP) but does not provide specific version numbers for any software libraries or dependencies. |
| Experiment Setup | No | The paper mentions using a "multi-class log-loss to optimize the parameters" but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) in the main text. |
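The chronological 80%/10%/10% split quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not code from the paper: the function name, the quadruple layout `(head, relation, tail, timestamp)`, and the toy data are all assumptions, but the key property matches the quote — examples are ordered by time before partitioning, so validation and test facts lie strictly after the training facts.

```python
def chronological_split(quadruples, train_frac=0.8, valid_frac=0.1):
    """Split (head, relation, tail, timestamp) quadruples by timestamp order.

    Illustrative sketch of the 80/10/10 chronological split described in the
    paper's preprocessing; names and signature are hypothetical.
    """
    ordered = sorted(quadruples, key=lambda q: q[3])  # sort by timestamp
    n = len(ordered)
    n_train = int(n * train_frac)
    n_valid = int(n * valid_frac)
    return (ordered[:n_train],                      # earliest 80%
            ordered[n_train:n_train + n_valid],     # next 10%
            ordered[n_train + n_valid:])            # most recent 10%

# Toy example: 10 quadruples with shuffled timestamps
quads = [(i, 0, i + 1, ts) for i, ts in enumerate([5, 2, 9, 0, 7, 3, 8, 1, 6, 4])]
train, valid, test = chronological_split(quads)
```

The point of splitting by time rather than at random is that TKG extrapolation evaluates prediction of future facts, so no test-time timestamp may appear during training.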