Tokenphormer: Structure-aware Multi-token Graph Transformer for Node Classification

Authors: Zijie Zhou, Zhaoqi Lu, Xuekai Wei, Rongqin Chen, Shenghui Zhang, Pak Lon Ip, Leong Hou U

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that the proposed Tokenphormer achieves state-of-the-art performance on node classification tasks. Table 1: Comparison of Tokenphormer with baselines on various datasets.
Researcher Affiliation | Academia | (1) University of Macau; (2) The Hong Kong Polytechnic University; (3) Guangdong Institute of Intelligent Science and Technology, China
Pseudocode | No | The paper describes the proposed method using textual descriptions and mathematical formulations (e.g., Equations 3–8) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/Dodo-D-Caster/Tokenphormer
Open Datasets | Yes | Table 1: Comparison of Tokenphormer with baselines on various datasets... Cora, Citeseer, Flickr, Photo, DBLP, Pubmed; Table 2: Experiments on Heterogeneous Datasets... Cornell, Wisconsin, Actor.
Dataset Splits | No | The paper uses Cora, Citeseer, Flickr, DBLP, Pubmed, Cornell, Wisconsin, and Actor for node classification. While these are common benchmarks, the main text does not explicitly specify the training/validation/test splits (e.g., percentages or split methodology). It states that 'Detailed descriptions of methods and datasets can be found in Appendix B.1', implying this information may appear in the appendix, but it is not present in the main body.
Hardware Specification | No | The paper does not explicitly report any hardware details, such as GPU models, CPU types, or memory, used to run the experiments in the main body of the text.
Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers, such as Python versions or library versions (e.g., PyTorch 1.9), in the main body of the text.
Experiment Setup | No | The paper describes the overall model architecture and its components (Transformer encoder, multi-head self-attention, FFN, Layer Normalization) and mentions '10 runs with different random seeds'. However, it does not report specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) in the main body of the text.