Mitigating Message Imbalance in Fraud Detection with Dual-View Graph Representation Learning
Authors: Yudan Song, Yuecen Wei, Yuhang Lu, Qingyun Sun, Minglai Shao, Li-e Wang, Chunming Hu, Xianxian Li, Xingcheng Fu
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct experiments to verify the validity of MimbFD. Specifically, we aim to answer the following research questions: RQ1: How does our model performance compare to existing state-of-the-art baselines? RQ2: How do the modules of TMR and LCD benefit the prediction? RQ3: How does the MimbFD perform with different hyperparameters? RQ4: Is the MimbFD able to find fraudsters while still maintaining close ties within the two groups? RQ5: Can the MimbFD capture supervisory messages about fraudsters across different levels of imbalance settings? |
| Researcher Affiliation | Academia | 1 Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, China; 2 Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, China; 3 School of Software, Beihang University, Beijing, China; 4 SKLCCSE, School of Computer Science and Engineering, Beihang University, China; 5 School of New Media and Communication, Tianjin University, China |
| Pseudocode | No | The paper includes mathematical equations and descriptions of the methodology, but it does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about the release of source code or provide any links to a code repository. |
| Open Datasets | Yes | Dataset. Three multi-relation graph fraud datasets, YelpChi [Rayana and Akoglu, 2015], Amazon [McAuley and Leskovec, 2013], and Comp [Wu et al., 2023], are used. |
| Dataset Splits | Yes | For the dataset split, we divide it into three parts: training set, validation set, and test set, with ratios of 4:2:4, respectively, following common settings. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU models, memory). |
| Software Dependencies | No | The paper mentions "PyTorch and DGL" as software used, but does not specify their version numbers. |
| Experiment Setup | Yes | Experimental Setting. In our experiments, Adam is chosen as the optimizer. We implement our method through PyTorch and DGL. For the dataset split, we divide it into three parts: training set, validation set, and test set, with ratios of 4:2:4, respectively, following common settings. For GCN, GAT, GraphSAGE, and ReNode, we convert multi-relational graphs to isomorphic graphs as input. For fraud detection methods, we use publicly available source code and input multi-relational graphs. |
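The paper reports a 4:2:4 train/validation/test split but releases no code, so the exact splitting procedure is unknown. A minimal sketch of one plausible implementation, assuming a random permutation of node indices (the function name `split_indices` and the seed are hypothetical, not from the paper):

```python
import numpy as np

def split_indices(n, ratios=(0.4, 0.2, 0.4), seed=0):
    """Split n node indices into train/val/test sets at the paper's
    4:2:4 ratio. Hypothetical helper -- the paper does not specify
    whether the split is random, stratified, or fixed."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)               # shuffle all node indices
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]           # remainder goes to test
    return train, val, test

train, val, test = split_indices(10)
print(len(train), len(val), len(test))     # 4 2 4
```

A stratified split (preserving the fraud/benign ratio in each part) would be a reasonable alternative given the class imbalance the paper targets, but the text does not say which was used.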