Spatial-Temporal Heterogeneous Graph Contrastive Learning for Microservice Workload Prediction

Authors: Mohan Gao, Kexin Xu, Xiaofeng Gao, Tengwei Cai, Haoyuan Ge

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on two datasets, including the MS dataset obtained from Ant Group, which is one of the world's largest cloud service providers, demonstrate the superiority of STEAM.
Researcher Affiliation Collaboration 1MoE Key Lab of Artificial Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; 2Ant Group, Hangzhou, China
Pseudocode No The paper describes methods through textual descriptions and mathematical equations (e.g., equations 1-8) but does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code No The paper does not provide any explicit statement about releasing source code, nor does it include a link to a code repository.
Open Datasets Yes MS contains workload information for 46 microservices over a 10-day period. Ali1 is sourced from the Alibaba Cluster Trace; it contains workload records for 148 machines over 8 days. In the data preprocessing, any outlier values below 0 are set to 0. The datasets are divided into training and testing sets in a 7:1 ratio along the time dimension, with the last 24 hours of the training set used as the validation set. 1https://github.com/alibaba/clusterdata/tree/v2018
Dataset Splits Yes The datasets are divided into training and testing sets in a 7:1 ratio along the time dimension, with the last 24 hours of the training set used as the validation set.
Hardware Specification Yes All models are implemented using PyTorch and trained using the Adam optimizer on an NVIDIA A10 GPU.
Software Dependencies No All models are implemented using PyTorch and trained using the Adam optimizer on an NVIDIA A10 GPU. The paper mentions PyTorch but does not provide a specific version number, nor other software dependencies with versions.
Experiment Setup Yes The hyperparameters for STEAM are set as follows: the learning rate is 0.001 and the batch size is 128. The dimension of representation d is 16. All similarity thresholds are set to the value of the top 10% Pearson coefficients. The kernel size for the temporal convolutions is set to 3, with two convolutional layers. The temperature parameter ζ is set to 0.5. The weight coefficient λ is set to 0.9.
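The split protocol quoted above (7:1 train/test along the time dimension, last 24 hours of training held out for validation, negative outliers clipped to 0) can be sketched as follows. The function name and the `steps_per_day` parameter are illustrative assumptions; the paper does not state the sampling interval.

```python
import numpy as np

def split_time_series(data, train_ratio=7 / 8, steps_per_day=288):
    """Split a (time, series) array along the time axis.

    Sketch of the paper's protocol: 7:1 train/test split along the time
    dimension, with the last 24 hours of the training portion used as the
    validation set. `steps_per_day` (288 assumes 5-minute sampling) is a
    hypothetical choice, not stated in the paper.
    """
    # Preprocessing from the paper: any outlier values below 0 are set to 0.
    data = np.clip(data, 0, None)
    n = data.shape[0]
    split = int(n * train_ratio)          # 7:1 boundary in time
    train, test = data[:split], data[split:]
    # Carve the final 24 hours of the training set out as validation.
    train, val = train[:-steps_per_day], train[-steps_per_day:]
    return train, val, test
```

With 8 days of 5-minute samples (2304 steps), this yields 1728 training, 288 validation, and 288 test steps.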
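The "top 10% Pearson coefficients" similarity threshold can likewise be sketched: compute all pairwise Pearson coefficients between workload series and keep edges whose coefficient reaches the 90th percentile. The function name and binary-adjacency return format are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def pearson_adjacency(workloads, top_frac=0.10):
    """Build a similarity graph over workload series.

    `workloads` is an (N, T) array of N series. An edge (i, j) is kept
    when the Pearson correlation of series i and j is at or above the
    top-`top_frac` quantile of all pairwise coefficients, matching the
    paper's "top 10% Pearson coefficients" threshold rule.
    """
    corr = np.corrcoef(workloads)          # (N, N) Pearson matrix
    iu = np.triu_indices_from(corr, k=1)   # distinct off-diagonal pairs
    thresh = np.quantile(corr[iu], 1 - top_frac)
    adj = (corr >= thresh).astype(int)
    np.fill_diagonal(adj, 0)               # drop self-loops
    return adj, thresh
```

Using the quantile of observed coefficients (rather than a fixed cutoff) keeps the graph density roughly constant across datasets with different correlation scales.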