Community-Aware Variational Autoencoder for Continuous Dynamic Networks

Authors: Junwei Cheng, Chaobo He, Pengxing Feng, Weixiong Liu, Kunlin Han, Yong Tang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experimental results demonstrate that the proposed CT-VAE and CT-CAVAE achieve more favorable performance compared with the state-of-the-art baselines. ... Extensive experiments on six real-world datasets verify that the proposed CT-VAE outperforms state-of-the-art baselines and demonstrate the promising performance of CT-CAVAE in specific data scenarios.
Researcher Affiliation Collaboration (1) School of Computer Science, South China Normal University; (2) Department of Electrical Engineering, City University of Hong Kong; (3) CMT US Holdings LLC; (4) Computer Science Department, University of Southern California
Pseudocode No The paper describes methods and processes in paragraph text and mathematical formulations but does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code No The paper does not contain any explicit statements about releasing source code, nor does it provide links to any code repositories.
Open Datasets Yes To comprehensively evaluate our methods, several continuous dynamic networks from various real-world domains are used. Specifically, these include a co-author network (DBLP), two citation networks (arXiv CS and arXiv AI), a brain network (Brain), a patent citation network (Patent), and a high school student interaction network (School). ... For a detailed introduction to these datasets, refer to (Liu et al. 2024b).
Dataset Splits No The paper mentions using six real-world datasets but does not specify how they were split into training, validation, or test sets, which would be needed for reproduction.
Hardware Specification Yes All trials have been conducted on Intel Core i7-6700 CPUs and NVIDIA RTX 3090 GPUs.
Software Dependencies No The paper mentions that 'Adam is employed for training' but does not specify any software libraries or their version numbers, such as Python, PyTorch, or TensorFlow.
Experiment Setup Yes For CT-VAE and CT-CAVAE, Adam is employed for training, with the learning rate selected from {1e-2, 1e-3, 2e-4, 5e-5}. Additionally, we set the negative sampling size to 2 and the history window to 1 in the Hawkes process. To ensure the fairness of our experiments, we initialize the representation Z using node2vec for all methods that require initialization. Furthermore, our proposed methods are trained for 100 epochs with a batch size of 1024.
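Since the paper releases no code, the reported setup can be captured as a minimal configuration sketch. The class and function names below are assumptions for illustration; only the hyperparameter values (learning-rate grid, negative sampling size, history window, epochs, batch size, node2vec initialization) come from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Hypothetical container for the training setup reported in the paper."""
    learning_rate: float           # selected from the reported grid below
    negative_samples: int = 2      # negative sampling size in the Hawkes process
    history_window: int = 1        # history window in the Hawkes process
    epochs: int = 100
    batch_size: int = 1024
    init_method: str = "node2vec"  # representation Z initialized with node2vec

# Learning-rate candidates reported for CT-VAE and CT-CAVAE.
LR_GRID = [1e-2, 1e-3, 2e-4, 5e-5]

def config_grid():
    """Yield one candidate configuration per learning rate in the grid."""
    for lr in LR_GRID:
        yield TrainConfig(learning_rate=lr)

configs = list(config_grid())
```

A reproduction attempt would train each configuration with the Adam optimizer and keep the learning rate that performs best on held-out data; the paper does not state the selection criterion, so that choice is left open here.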