Fusing Social Networks with Deep Learning for Volunteerism Tendency Prediction
Authors: Yongpo Jia, Xuemeng Song, Jingbo Zhou, Li Liu, Liqiang Nie, David Rosenblum
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | With extensive experimental evaluations, we demonstrate the effectiveness of our model, which outperforms several state-of-the-art approaches in terms of precision, recall and F1-score. |
| Researcher Affiliation | Collaboration | ¹NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore; ²School of Computing, National University of Singapore; ³Big Data Lab, Baidu Research, China |
| Pseudocode | No | The paper describes the optimization process using mathematical equations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper only provides a link to the compiled dataset, not to the source code for the methodology or experiments. 'The compiled dataset is currently publicly available via: http://multiplesocialnetworklearning.azurewebsites.net/' |
| Open Datasets | Yes | The compiled dataset is currently publicly available via: http://multiplesocialnetworklearning.azurewebsites.net/ |
| Dataset Splits | Yes | To avoid overfitting and achieve the best performance, we selected the optimal parameters for each model based on 10-fold cross validation, and we performed another 9-fold cross validation on the training data with grid search in each round (i.e. nested cross-validation). Hence, in each experiment, for each round of the 10-fold cross validation, 90% of the samples were used for training the model with 9-fold cross validation, and the remaining 10% were reserved for testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions 'Python' and 'Theano' but does not specify version numbers for these or any other software dependencies needed for reproducibility. |
| Experiment Setup | Yes | To save the cost of memory and computation, we use three hidden layers to construct our FARSEEING model and its variants as well as the DBN and M-DBM, which is sufficient to achieve good performance. ... For the grid search, it was conducted between 10^-2 and 10^2 with small but adaptive step sizes. The step sizes are 0.01, 0.05, 0.5 and 5 for the ranges [0.01, 0.1], [0.1, 1], [1, 10] and [10, 100], respectively (Nie et al. 2015). |
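The evaluation protocol quoted in the Dataset Splits and Experiment Setup rows — an outer 10-fold cross validation with an inner 9-fold grid search over an adaptive parameter grid from 10^-2 to 10^2 — can be sketched as follows. This is a minimal illustration of the described procedure, not the authors' code; `fit` and `score` are placeholder callables, and the fold-assignment seeds are arbitrary assumptions:

```python
import numpy as np

def adaptive_grid():
    """Parameter grid from 10^-2 to 10^2 with the step sizes quoted in the
    paper: 0.01 on [0.01, 0.1], 0.05 on [0.1, 1], 0.5 on [1, 10], 5 on [10, 100].
    Built in integer units of 0.01 to avoid floating-point drift."""
    grid = []
    for lo, hi, step in [(1, 10, 1), (10, 100, 5), (100, 1000, 50), (1000, 10000, 500)]:
        grid.extend(v / 100 for v in range(lo, hi, step))
    grid.append(100.0)  # include the upper endpoint
    return grid

def kfold(n, k, seed):
    """Shuffle n sample indices and split them into k folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def nested_cv(X, y, fit, score, grid, outer_k=10, inner_k=9):
    """Outer 10-fold CV; in each round, 90% of the samples train the model
    (with parameters chosen by inner 9-fold grid search) and 10% are held
    out for testing, as described in the Dataset Splits row."""
    outer_scores = []
    for test in kfold(len(y), outer_k, seed=0):
        train = np.setdiff1d(np.arange(len(y)), test)

        def inner_score(p):
            # Mean validation score of parameter p over the inner folds.
            folds = kfold(len(train), inner_k, seed=1)
            scores = []
            for v in folds:
                val = train[v]                    # inner validation indices
                tr = np.setdiff1d(train, val)     # inner training indices
                scores.append(score(fit(X[tr], y[tr], p), X[val], y[val]))
            return np.mean(scores)

        best = max(grid, key=inner_score)          # grid search on training data
        outer_scores.append(score(fit(X[train], y[train], best), X[test], y[test]))
    return float(np.mean(outer_scores))
```

The grid contains 64 candidate values (9 + 18 + 18 + 18 segment points plus the endpoint 100), so each outer round fits 64 x 9 inner models before the final refit — which is why the paper restricts the search to "small but adaptive" step sizes rather than a uniform fine grid.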