Neural Variable-Order Fractional Differential Equation Networks

Authors: Wenjun Cui, Qiyu Kang, Xuhao Li, Kai Zhao, Wee Peng Tay, Weihua Deng, Yidong Li

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental. In Section 4, titled 'Experiments', the paper details 'Datasets and Training details' and 'Performance and Analysis', and presents 'Test loss' tables and 'Node classification results'. This explicitly indicates empirical studies with data analysis.
Researcher Affiliation: Academia. All listed affiliations are universities: Beijing Jiaotong University, University of Science and Technology of China, Anhui University, Nanyang Technological University, and Lanzhou University. The corresponding author's email (EMAIL) also points to an academic institution.
Pseudocode: No. The paper describes mathematical formulations and numerical solvers, such as the L1 Predictor and ABM Predictor, but it does not include any clearly labeled pseudocode or algorithm blocks; the methods are explained only in textual and mathematical form.
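For context on what such a solver looks like: the L1 scheme is the classical discretization of the Caputo fractional derivative of order alpha in (0, 1) on a uniform grid, and is the standard building block behind L1-type predictors. The sketch below is an illustrative NumPy implementation of that textbook scheme, not the paper's code; the grid size, test function, and tolerances are assumptions for the sanity check.

```python
import math
import numpy as np

def l1_caputo(u, h, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    on the uniform grid t_n = n*h:
        D^a u(t_n) ~ h^(-a)/Gamma(2-a) * sum_{j=0}^{n-1} b_j (u_{n-j} - u_{n-j-1}),
    with weights b_j = (j+1)^(1-a) - j^(1-a).
    """
    n_pts = len(u)
    j = np.arange(n_pts - 1)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 weights b_0, b_1, ...
    du = np.diff(u)                                  # backward differences u_k - u_{k-1}
    c = h ** (-alpha) / math.gamma(2 - alpha)
    out = np.zeros(n_pts)
    for n in range(1, n_pts):
        # most recent difference is paired with b_0, oldest with b_{n-1}
        out[n] = c * np.dot(b[:n], du[n - 1::-1])
    return out

# Sanity check on u(t) = t^2, whose exact Caputo derivative is
# 2 t^(2-a) / Gamma(3-a); the L1 scheme converges at rate O(h^(2-a)).
h, alpha = 0.01, 0.5
t = np.arange(0, 101) * h                            # uniform grid on [0, 1]
approx = l1_caputo(t ** 2, h, alpha)
exact_at_1 = 2 / math.gamma(3 - alpha)               # exact value at t = 1
```

An explicit "predictor" step for D^a u = f(t, u) then solves this triangular relation for the newest value u_n using f evaluated at the previous point.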
Open Source Code: Yes. The implementation code is available at https://github.com/cuiwjTech/AAAI2025_NvoFDE.
Open Datasets: Yes. The paper mentions several well-known public datasets in Section 4: the 'Disease and Airport datasets'; Cora, Citeseer, Pubmed, Coauthor CS, Computer, Photo, Coauthor Phy, ogbn-arxiv, Airport, and Disease (Section 4.2); 'Roman-empire, Wiki-cooc, Minesweeper, Questions, Workers, and Amazon-ratings' (Section 4.3); and 'Fashion-MNIST' and 'CIFAR' (Section 4.4).
Dataset Splits: Yes. For the Disease and Airport datasets, the paper employs the same data splitting and pre-processing methods as detailed in (Chami et al. 2019). For the remaining datasets, it follows the experimental settings used in GRAND (Chamberlain et al. 2021a) and F-GRAND, applying random splits to the largest connected component of each dataset; the splits for the Section 4.3 benchmarks follow the same approach as (Platonov et al. 2023).
Hardware Specification: Yes. All experiments are implemented using the PyTorch framework (Paszke et al. 2019) on a single NVIDIA RTX A4000 16GB GPU.
Software Dependencies: No. The paper states the implementation uses 'the PyTorch framework (Paszke et al. 2019)' but does not provide a specific version number for PyTorch or any other software library; a software name alone, without a version, is insufficient.
Experiment Setup: Yes. For the training set, the time interval [0, 1] is discretized uniformly for simplicity, obtaining points t_j, where j is a positive integer with j = 10, 20, 30, 40, etc. Training uses the Adam algorithm (Kingma and Ba 2014) with 200, 500, 1000, 1500, and 2000 iterations, and the learning rate is set to 0.01. The model employs a three-layer neural network, as shown in Fig. 1, with the hidden layer consisting of 30 neurons.
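The reported configuration (uniform grid on [0, 1], one hidden layer of 30 neurons, Adam with learning rate 0.01, up to 2000 iterations) can be sketched as below. This is a self-contained NumPy stand-in rather than the paper's PyTorch implementation; the sine target, tanh activation, and weight initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform discretization of [0, 1], as in the reported setup.
t = np.linspace(0.0, 1.0, 41)[:, None]      # training inputs t_j
y = np.sin(2 * np.pi * t)                   # illustrative target (assumption)

# Three-layer net: input -> 30 hidden units (tanh) -> output.
params = {
    "W1": rng.normal(0, 0.5, (1, 30)), "b1": np.zeros(30),
    "W2": rng.normal(0, 0.5, (30, 1)), "b2": np.zeros(1),
}

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h, h @ p["W2"] + p["b2"]

def grads(p, x, y):
    """Backpropagate the mean-squared-error loss by hand."""
    h, out = forward(p, x)
    d_out = 2 * (out - y) / len(x)          # dMSE/d_out
    d_h = (d_out @ p["W2"].T) * (1 - h ** 2)  # back through tanh
    return {"W1": x.T @ d_h, "b1": d_h.sum(0),
            "W2": h.T @ d_out, "b2": d_out.sum(0)}

def adam_train(p, x, y, lr=0.01, iters=2000, beta1=0.9, beta2=0.999, eps=1e-8):
    """Full-batch Adam (Kingma and Ba 2014) with the reported lr = 0.01."""
    m = {k: np.zeros_like(val) for k, val in p.items()}
    v = {k: np.zeros_like(val) for k, val in p.items()}
    for step in range(1, iters + 1):
        g = grads(p, x, y)
        for k in p:
            m[k] = beta1 * m[k] + (1 - beta1) * g[k]
            v[k] = beta2 * v[k] + (1 - beta2) * g[k] ** 2
            m_hat = m[k] / (1 - beta1 ** step)   # bias correction
            v_hat = v[k] / (1 - beta2 ** step)
            p[k] -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return p

adam_train(params, t, y)
_, pred = forward(params, t)
mse = float(np.mean((pred - y) ** 2))
```

With this budget the small network fits a smooth 1-D target to low training error, which is consistent with the scale of setup the paper reports.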