DAMMFND: Domain-Aware Multimodal Multi-view Fake News Detection
Authors: Weihai Lu, Yu Tong, Zhiqiu Ye
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on two real-world datasets demonstrate that the proposed model outperforms state-of-the-art baselines. ... We evaluated our model using two real-world datasets: Weibo (Wang et al. 2018) and Weibo-21 (Nan et al. 2021). |
| Researcher Affiliation | Academia | ¹Peking University, ²Wuhan University, ³Anhui University; EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/luweihai/DAMMFND |
| Open Datasets | Yes | We evaluated our model using two real-world datasets: Weibo (Wang et al. 2018) and Weibo-21 (Nan et al. 2021). |
| Dataset Splits | Yes | The Weibo dataset contains 7,532 news articles (3,749 true, 3,783 fake) for training and 1,996 articles (996 true, 1,000 fake) for testing. ... Weibo21, a multi-domain dataset, comprises 9,127 articles (4,640 true, 4,487 fake), which we partitioned into training and test sets following established benchmark procedures. |
| Hardware Specification | Yes | All codes are developed using PyTorch (Paszke et al. 2019) and executed on an NVIDIA RTX 4090 graphics processing unit. |
| Software Dependencies | No | The paper mentions software like PyTorch, BERT, CLIP, and MAE but does not provide specific version numbers for any of these dependencies. |
| Experiment Setup | Yes | In the text data encoding section, we set 197 as the maximum length for text input and utilized pre-trained BERT (Devlin et al. 2018) and CLIP models for text encoding. For the visual data encoding, we first resized the input images to 224 × 224 pixels and employed pre-trained MAE (He et al. 2021) and CLIP models to encode the image data. ... In the DAMMFND framework's loss formula (Eq. 20), the parameter α is set to 0.25. We set the number of channels k_text, k_img and k_mm to 18. |
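The setup row above pins down the paper's reported hyperparameters (max text length 197, 224 × 224 image input, α = 0.25, 18 channels per view). A minimal sketch of how a reproduction might record these settings is shown below; the `HPARAMS` dict and the `pad_or_truncate` helper are illustrative assumptions, not code from the DAMMFND repository, and the actual pipeline feeds these values into pretrained BERT, CLIP, and MAE encoders.

```python
# Hypothetical hyperparameter record for a DAMMFND reproduction.
# Values are taken from the paper; all names here are illustrative.
HPARAMS = {
    "max_text_len": 197,        # max token length for BERT/CLIP text input
    "image_size": (224, 224),   # images resized before MAE/CLIP encoding
    "alpha": 0.25,              # loss-weighting parameter in Eq. 20
    "k_text": 18,               # number of text channels
    "k_img": 18,                # number of image channels
    "k_mm": 18,                 # number of multimodal channels
}

def pad_or_truncate(token_ids, max_len=HPARAMS["max_text_len"], pad_id=0):
    """Clip a token-id sequence to max_len, padding shorter ones with pad_id."""
    return (token_ids[:max_len] + [pad_id] * (max_len - len(token_ids)))[:max_len]
```

Keeping all settings in one dict makes it easy to verify a rerun against the paper's reported configuration before training.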