Interpretable Node Representation with Attribute Decoding
Authors: Xiaohui Chen, Xi Chen, Li-Ping Liu
TMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results demonstrate the advantage of the proposed model when learning graph data in an interpretable approach. We conduct extensive experiments to evaluate and diagnose our model. The results show that node representations learned by our model perform well in link prediction and node clustering tasks, indicating the good quality of these representations. |
| Researcher Affiliation | Academia | Xiaohui Chen EMAIL Department of Computer Science Tufts University Xi Chen EMAIL Department of Computer Science Rutgers University Li-Ping Liu EMAIL Department of Computer Science Tufts University |
| Pseudocode | Yes | The training procedure is shown in Algorithm 1. Algorithm 1 Variational EM for NORAD |
| Open Source Code | No | The paper does not explicitly state that the source code for its methodology is made available, nor does it provide a direct link to a code repository. It mentions a link to the OpenReview forum for review purposes, not for code. |
| Open Datasets | Yes | Datasets. We use seven benchmark datasets, including Cora, Citeseer, Pubmed, DBLP, OGBL-collab, OGBN-arxiv, and Wiki-cs (Morris et al., 2020; Hu et al., 2020; Mernyei & Cangea, 2020). |
| Dataset Splits | Yes | We follow the data splitting strategy in Kipf & Welling (2016) and report the mean and standard deviation over 10 random data splits. For Wiki-cs, since all models yield high performance by following the data splitting strategy in VGAE as mentioned in (Mernyei & Cangea, 2020), we lower the training ratio and use the split ratio train/val/test: 0.6/0.1/0.3. Specifically, we use different training ratios (TRs) and keep 20%, 40%, 60%, and 80% of edges in the training set. For the remaining edges, 1/3 are used for validation, and 2/3 are used for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizers' and 'Gumbel-softmax trick' but does not specify any software libraries or packages with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We search the number of clusters K over {32, 64, 128, 256}... We choose K = 256 for all models... For the node decoder ATN, we search (d1, d2) over {(128, 64), (64, 32)}. We set d1 to 128 and d2 to 64... We set Te = 10 and Tm = 10. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.001. We use temperature annealing with 0.5 as the minimum temperature... We optimize the representation for multiple iterations and choose the number of iterations to be 50... Detailed configurations of the encoders of the baselines and our models are shown in Table 11. |
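The setup above mentions the Gumbel-softmax trick with temperature annealing down to a minimum of 0.5. The paper excerpt does not specify the annealing schedule, so the sketch below uses an assumed exponential decay (the names `gumbel_softmax`, `annealed_tau`, the initial temperature `tau0`, and the decay `rate` are illustrative, not from the paper); it only illustrates how a relaxed one-hot sample sharpens as the temperature is annealed toward the stated minimum.

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Draw one relaxed one-hot sample via the Gumbel-softmax trick."""
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()          # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()  # relaxed one-hot vector, sums to 1

def annealed_tau(step, tau0=1.0, rate=3e-3, tau_min=0.5):
    """Exponential temperature decay, clipped at the minimum of 0.5
    stated in the paper (tau0 and rate are assumed values)."""
    return max(tau0 * np.exp(-rate * step), tau_min)

rng = np.random.default_rng(0)
logits = np.log(np.array([0.7, 0.2, 0.1]))  # toy 3-way categorical
for step in (0, 500, 1000):
    sample = gumbel_softmax(logits, annealed_tau(step), rng)
    # each sample lies on the simplex; lower tau gives sharper samples
```

Lower temperatures push samples toward one-hot vectors (enabling near-discrete cluster assignments) while keeping the sampling step differentiable in logits, which is the usual reason a minimum temperature such as 0.5 is used rather than annealing to zero.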