Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Fairness in Influence Maximization through Randomization

Authors: Ruben Becker, Gianlorenzo D’Angelo, Sajjad Ghobadi, Hugo Gilbert

JAIR 2022 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We report on an experimental study on the two probabilistic maximin problems. In fact, we provide implementations of multiplicative-weight routines for both the set-based and the node-based problems. ... In our study we use random, artificial, as well as real-world instances. On these instances, we compare our methods, both in terms of fairness and efficiency (i.e., total spread) with a standard implementation of the greedy algorithm for influence maximization and the (very straightforward) methods proposed by Fish et al. (2019) as well as the more involved method due to Tsang et al. (2019).
Researcher Affiliation | Academia | Ruben Becker EMAIL Gianlorenzo D'Angelo EMAIL Sajjad Ghobadi EMAIL Gran Sasso Science Institute, L'Aquila, Italy Hugo Gilbert EMAIL Université Paris-Dauphine, Université PSL, CNRS, LAMSADE, 75016 Paris, France
Pseudocode | Yes | Algorithm 1: Multiplicative Weight(G, C, k, ε, δ, η)
Open Source Code | Yes | The code can be downloaded from https://github.com/sajjad-ghobadi/fair_maximin_im
Open Datasets | Yes | We use the following networks: (1) Random instances generated according to the Barabási-Albert model (Albert & Barabási, 2002)... (3) The publicly available synthetic instances from the work of Tsang et al. (2019). (4) Real world instances from the SNAP database (Leskovec & Krevl, 2014) and a paper by Guimerà et al. (2003)
Dataset Splits | No | The paper describes generating graphs for experiments (e.g., '5 runs on each of 5 graphs generated according to the respective graph model') and averaging results, but it does not specify explicit train/test/validation splits for datasets, which are typically associated with supervised learning tasks. This paper focuses on evaluating algorithms on full network instances.
Hardware Specification | Yes | All experiments were executed on a compute server running Ubuntu 16.04.5 LTS with 24 Intel(R) Xeon(R) CPU E5-2643 3.40GHz cores and a total of 128 GB RAM.
Software Dependencies | Yes | We implement the multiplicative-weight routines for both the set-based and the node-based problems in C++ and the routines from Fish et al. in python using networkx (Hagberg et al., 2008) for graph related computations. ... We use their python implementation as provided while choosing gurobi as solver... The code was executed with python version 3.7.6 and C++ implementations were compiled with g++ 7.5.0.
Experiment Setup | Yes | We choose the η parameter of the multiplicative weight routine (see Theorem 4.6) to be 0.1. ... For the evaluation of σv(x) we choose to approximate the value using a Chernoff bound in a way to obtain an additive ε-approximation of the values with probability 1 − δ and in the reported experiments we choose both parameters as 0.1. We use the IC model as diffusion model in all our experiments. For most of our experiments and unless stated differently, we choose edge weights uniformly at random in the interval [0, 0.4] for random instances and the synthetic instances of Tsang et al., and in the interval [0, 0.2] for the real world instances.
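The Pseudocode row names Algorithm 1, Multiplicative Weight(G, C, k, ε, δ, η). The paper's exact routine is not reproduced in this report; as an illustration only, the sketch below shows the generic multiplicative-weights update that such routines build on, where an expert (e.g., a community with low spread) that keeps suffering high loss accumulates weight and so receives more attention in later rounds. The function name, the loss encoding, and the toy data are assumptions, not the paper's implementation; only the learning rate η = 0.1 is taken from the quoted setup.

```python
import math

def multiplicative_weights(losses_per_round, eta=0.1):
    """Generic multiplicative-weights update (illustrative sketch).

    losses_per_round: list of rounds; each round is a list of losses
    in [0, 1], one per expert (e.g., one per community whose coverage
    we want to raise; loss 1 = not covered at all this round).
    eta: learning rate (the quoted setup reports eta = 0.1).

    Returns the final normalized weight vector: experts that suffered
    high loss end up with high weight.
    """
    n = len(losses_per_round[0])
    w = [1.0] * n
    for losses in losses_per_round:
        # Exponential update: w_i <- w_i * exp(eta * loss_i), so
        # persistently under-covered experts gain relative weight.
        w = [wi * math.exp(eta * li) for wi, li in zip(w, losses)]
        total = sum(w)
        w = [wi / total for wi in w]  # renormalize to a distribution
    return w

# Toy usage: expert 0 is never covered (loss 1), expert 1 always is
# (loss 0); over 10 rounds the weight shifts toward expert 0.
final_w = multiplicative_weights([[1.0, 0.0]] * 10, eta=0.1)
```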
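The Open Datasets and Experiment Setup rows describe random instances generated with the Barabási-Albert preferential-attachment model and edge weights drawn uniformly from [0, 0.4]. A minimal stdlib-only sketch of that generation process follows (networkx provides the same model via nx.barabasi_albert_graph; the parameter values below are illustrative, and this simplified variant may attach slightly fewer than m edges per node when degree-proportional sampling picks duplicate targets):

```python
import random

def barabasi_albert(n, m, weight_range=(0.0, 0.4), seed=None):
    """Preferential-attachment graph with uniform random edge weights.

    n: total number of nodes; m: edges added per arriving node.
    Returns a dict mapping directed (new, target) edges to weights in
    weight_range, matching the IC-model edge-probability setup
    described in the Experiment Setup row.
    """
    rng = random.Random(seed)
    repeated = []  # node list with multiplicity = degree, so uniform
                   # sampling from it is preferential attachment
    edges = {}
    targets = list(range(m))  # the first arriving node links to 0..m-1
    for new in range(m, n):
        for t in set(targets):  # dedupe repeated picks of one target
            lo, hi = weight_range
            edges[(new, t)] = rng.uniform(lo, hi)
        repeated.extend(targets)
        repeated.extend([new] * m)
        # degree-proportional choice of targets for the next node
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

g = barabasi_albert(100, 2, seed=0)
```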
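The Experiment Setup row states that σv(x) is approximated via a Chernoff bound to within additive ε with probability 1 − δ, with both parameters set to 0.1. As a hedged sketch of how such a guarantee fixes the number of Monte Carlo simulations, the standard Hoeffding/Chernoff bound for averages of [0, 1] indicators gives T ≥ ln(2/δ) / (2ε²); the paper's exact estimator may differ. The IC simulation below is likewise an illustrative live-edge sampler, not the authors' C++ implementation.

```python
import math
import random

def hoeffding_samples(eps, delta):
    """Simulations T so the empirical mean of [0, 1]-valued indicators
    is within eps of its expectation with probability >= 1 - delta:
    T >= ln(2 / delta) / (2 * eps^2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

def ic_reach_probability(edges, seeds, v, trials, seed=None):
    """Estimate the probability that node v is reached from the seed
    set under the Independent Cascade model: each trial keeps every
    directed edge (u, w) independently with its probability p and
    runs a traversal from the seeds over the kept (live) edges."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        frontier, active = list(seeds), set(seeds)
        while frontier:
            u = frontier.pop()
            for (a, b), p in edges.items():
                if a == u and b not in active and rng.random() < p:
                    active.add(b)
                    frontier.append(b)
        hits += v in active
    return hits / trials

T = hoeffding_samples(0.1, 0.1)  # eps = delta = 0.1 as in the paper
```

With ε = δ = 0.1 this yields T = ⌈ln 20 / 0.02⌉ = 150 simulations per estimate, which is why such Chernoff-based evaluation stays cheap even on the larger SNAP instances.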