Dynamic Graph Learning with Static Relations for Credit Risk Assessment
Authors: Qi Yuan, Yang Liu, Yateng Tang, Xinhuan Chen, Xuehao Zheng, Qing He, Xiang Ao
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world business datasets demonstrate that our proposed method achieves a 0.85%–2.5% improvement over existing SOTA methods. |
| Researcher Affiliation | Collaboration | 1. University of Chinese Academy of Sciences, CAS; 2. Tencent Weixin Group; 3. Institute of Intelligent Computing Technology, Suzhou, CAS |
| Pseudocode | No | The paper describes the methodology using text and mathematical equations, but it does not contain a clearly labeled pseudocode block or algorithm. |
| Open Source Code | No | Our DGNN-SR model is implemented using the PyTorch and PyTorch Geometric frameworks. The paper names the frameworks used but provides neither a link to nor an explicit statement of a code release for the described methodology. |
| Open Datasets | No | We collect two real-world datasets called D1 and D2 from Tencent Mobile Payment, ensuring full compliance with security and privacy policies. The datasets in this paper were properly sampled only for testing purposes and do not imply any commercial information. All users' private information is removed from the dataset. Besides, the experiment was conducted locally on Tencent's server by formal employees who strictly followed data protection regulations. |
| Dataset Splits | Yes | The two datasets are split chronologically into training, validation, and test sets according to user timestamps. The respective ratios for training, validation, and test sets in Dataset D1 are approximately 12:1:1, while those for Dataset D2 are approximately 20:1:1. |
| Hardware Specification | Yes | All experiments are conducted on a server with an A100 GPU. |
| Software Dependencies | No | Our DGNN-SR model is implemented using the PyTorch and PyTorch Geometric frameworks. The paper mentions the software frameworks used but does not specify their version numbers. |
| Experiment Setup | Yes | For optimization, we employ the AdamW optimizer with a learning rate of 1e-4. In terms of parameters, we utilize a 2-layer MLP classifier with a hidden dimension of 256. The binary features in the time semantic encoder contain two items: isWeekend and isHoliday. The batch size is set to 8000, and all models are trained for 100 epochs, incorporating an early stopping strategy with a patience of 5. |
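The reported hyperparameters (AdamW at lr 1e-4, a 2-layer MLP classifier with hidden dimension 256, batch size 8000, 100 epochs, early stopping with patience 5) can be assembled into a minimal PyTorch sketch. This is not the authors' released code; the input dimension (64) and number of classes (2) are assumptions for illustration, and the validation loss here is a placeholder for a real held-out evaluation.

```python
import torch
import torch.nn as nn

# Assumed dimensions for illustration; the paper does not report them here.
IN_DIM, HIDDEN, N_CLASSES = 64, 256, 2

# 2-layer MLP classifier with a hidden dimension of 256 (as reported).
classifier = nn.Sequential(
    nn.Linear(IN_DIM, HIDDEN),
    nn.ReLU(),
    nn.Linear(HIDDEN, N_CLASSES),
)

# AdamW optimizer with a learning rate of 1e-4 (as reported).
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-4)


def train(model, loader, validate, max_epochs=100, patience=5):
    """Train for up to 100 epochs with early stopping (patience 5)."""
    best_val, bad_epochs = float("inf"), 0
    for _ in range(max_epochs):
        for x, y in loader:  # batches of 8000 per the paper
            optimizer.zero_grad()
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
        val_loss = validate(model)  # evaluate on the validation split
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # early stopping triggered
    return best_val


# Smoke test on random data: one batch of 8000 samples.
x = torch.randn(8000, IN_DIM)
logits = classifier(x)
```

The early-stopping loop mirrors the stated strategy: training halts once the validation loss fails to improve for 5 consecutive epochs.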