Rhomboid Tiling for Geometric Graph Deep Learning

Authors: Yipeng Zhang, Longlong Li, Kelin Xia

ICML 2025

Reproducibility checklist (variable, result, and supporting LLM response):
Research Type: Experimental. "We evaluate the performance of RTPool on graph classification tasks using seven real-world graph datasets from the commonly utilized TUDataset benchmark. [...] The proposed model demonstrates superior performance, outperforming 21 state-of-the-art competitors on all 7 benchmark datasets. [...] Table 1. Performance of different models on benchmark datasets. [...] We also conducted an ablation study on the COX2, MUTAG, and PTC MR datasets to analyze the impact of replacing RTPooling with trivial mean pooling in the model, the effect of different GNN models used for feature updates in the pooling layers, and the influence of the choice of the underlying graph during updates. The detailed results are presented in Tables 2, 3, and 4."
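The "trivial mean pooling" baseline referenced in the ablation study simply averages all node feature vectors into a single graph-level embedding, discarding the rhomboid-tiling cluster structure. A minimal stdlib-only sketch (the function name is illustrative, not from the authors' code):

```python
# Sketch of the mean-pooling baseline from the ablation (Table 2):
# collapse a graph's node features into one embedding by averaging.
# `mean_pool` is an illustrative name, not the authors' API.

def mean_pool(node_features):
    """Average a list of equal-length node feature vectors."""
    n = len(node_features)
    dim = len(node_features[0])
    return [sum(f[j] for f in node_features) / n for j in range(dim)]

# Toy graph with three nodes, each carrying a 2-d feature vector.
graph = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(mean_pool(graph))  # → [3.0, 4.0]
```

Replacing RTPooling with this operation removes all learned, geometry-aware coarsening, which is what the ablation isolates.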
Researcher Affiliation: Academia. "1 Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore; 2 School of Mathematics, Shandong University, Jinan 250100, China. Correspondence to: Kelin Xia <EMAIL>."
Pseudocode: No. The paper provides detailed mathematical definitions, theorems, and proofs related to Rhomboid Tiling and its clustering properties (e.g., Sections 3.1, 3.2, 3.3, and Appendix A). It describes the methodology in prose and mathematical equations (e.g., Equations 2 and 3), but there are no explicitly labeled pseudocode blocks or algorithm listings with structured steps.
Open Source Code: Yes. "The code for our proposed method is available at https://github.com/ZhangYipeng01/RT_pooling."
Open Datasets: Yes. "We evaluate the performance of RTPool on graph classification tasks using seven real-world graph datasets from the commonly utilized TUDataset benchmark. Among these, three datasets represent chemical compounds, while the remaining four are molecular compound datasets. Chemical Compound Datasets: the chemical compound datasets include COX-2 (Sutherland et al., 2003), BZR (Sutherland et al., 2003), and MUTAG (Debnath et al., 1991). Molecular Compound Datasets: the molecular compound datasets include PTC MM, PTC MR, PTC FM, and PTC FR (Chen et al., 2007)."
Dataset Splits: Yes. "To ensure a fair and consistent evaluation, we adopt the same seeds as Wit-Topo Pool for a 90/10 random training/test split, guaranteeing identical training and test sets."
Hardware Specification: Yes. "In our study, the experiments were conducted on a machine equipped with NVIDIA RTX A5000 GPUs with 32GB of memory."
Software Dependencies: No. The paper does not explicitly mention specific versions of software libraries (e.g., PyTorch, TensorFlow, Python, scikit-learn) used in the implementation. It mentions applying a GIN layer and MLP, but without version details for the underlying frameworks or libraries.
Experiment Setup: Yes. "In our study, the experiments were conducted on a machine equipped with NVIDIA RTX A5000 GPUs with 32GB of memory. To enhance the model's performance across various datasets, we carefully selected appropriate hyperparameter settings, including learning rate, dropout ratio, and number of pooling layers, detailed in the Appendix Materials. The number of epochs was set to 500, and each dataset was evaluated five times, with the mean value used as the final metric and the standard deviation recorded. [...] Appendix B. Hyperparameter Settings: The hyperparameters of our RTPool model follow a default configuration, as shown in Table 6. For different datasets, specific hyperparameters were adjusted to optimize performance, with the detailed settings presented in Table 7." (Table 6 lists: batch size 16, #epochs 500, LR 0.001, #pooling layers 2, k 1, final dropout 0.5, weight decay 0.0001.)
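The default configuration from Table 6 can be captured as a plain dictionary; the key names below are illustrative (the paper reports only the values, not a config schema), with per-dataset overrides from Table 7 layered on top:

```python
# Default RTPool hyperparameters as reported in Table 6 of the paper.
# Key names are illustrative; only the values come from the source.
DEFAULT_CONFIG = {
    "batch_size": 16,
    "epochs": 500,
    "learning_rate": 0.001,
    "num_pooling_layers": 2,
    "k": 1,
    "final_dropout": 0.5,
    "weight_decay": 0.0001,
}

def config_for(dataset_overrides):
    """Merge per-dataset overrides (Table 7) over the defaults (Table 6)."""
    return {**DEFAULT_CONFIG, **dataset_overrides}

# Hypothetical per-dataset adjustment, mirroring how Table 7 tunes settings.
cfg = config_for({"learning_rate": 0.0005})
```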