SAMBA: A Generic Framework for Secure Federated Multi-Armed Bandits

Authors: Radu Ciucanu, Pascal Lafourcade, Gael Marcadet, Marta Soare

JAIR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental
    "In Section 6, we report on a proof-of-concept empirical evaluation that shows the feasibility and scalability of Samba." "We present a proof-of-concept empirical study of Samba."
Researcher Affiliation | Academia
    Radu Ciucanu, Univ. Grenoble Alpes, CNRS LIG, France; Pascal Lafourcade, Univ. Clermont Auvergne, CNRS LIMOS, France; Gael Marcadet, Univ. Clermont Auvergne, CNRS LIMOS, France; Marta Soare, Univ. Grenoble Alpes, CNRS LIG, France
Pseudocode | Yes
    "We give pseudocode in Figure 6." "Figure 6: Pseudocode of Samba participants."
Open Source Code | Yes
    "All details concerning our Samba prototype are available on a public GitHub repository, including our source code, the data, the generated results from which we obtained our plots, and scripts that allow to install the needed libraries and reproduce the entire workflow to generate our plots." Repository: https://github.com/gamarcad/paper-samba-code
Open Datasets | Yes
    "We study the feasibility and scalability of Samba through a proof-of-concept experimental study using two datasets that contain user ratings for movies, i.e., MovieLens (Harper & Konstan, 2016), and jokes, i.e., Jester (Goldberg, Roeder, Gupta, & Perkins, 2001), respectively."
Dataset Splits | No
    The paper describes how the bandit problem is built from the datasets (e.g., computing mean rewards for items used as arms) and mentions varying the budget N and the number of arms K, but it does not report traditional training/validation/test splits as used in supervised learning. The multi-armed bandit setting involves sequential decision-making rather than static dataset partitioning for model training and evaluation.
Hardware Specification | Yes
    "We did our experiments on a virtual machine running Ubuntu, located in a server with 8GB of RAM and 24 cores Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz."
Software Dependencies | No
    "We used Python 3. For AES-GCM we used the Cryptography library and keys of 256 bits. For Paillier, we used the phe library in the default configuration with keys of 2048 bits." The paper names its software components (Python, Cryptography, phe) but gives no version numbers for the libraries.
Experiment Setup | No
    "We tuned the algorithm-specific parameters (ε, τ, β) similarly to an existing technique (Kuleshov & Precup, 2014)." The paper states that these parameters were tuned following another technique but does not report the concrete values used in the experiments.
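For context on the parameters flagged in the last row: in classical bandit algorithms of the kind surveyed by Kuleshov & Precup, ε is typically the ε-greedy exploration rate, τ a softmax temperature, and β a UCB-style exploration weight (our reading; the report above does not spell this out). The sketch below is a minimal, non-secure ε-greedy loop in plain Python illustrating the role of ε; the arm means, budget, and seed are illustrative, and none of Samba's federated or cryptographic machinery is included:

```python
import random

def epsilon_greedy(means, N, epsilon, seed=0):
    """Run epsilon-greedy for N rounds on Bernoulli arms with the given means.

    With probability epsilon, a uniformly random arm is explored; otherwise
    the arm with the highest empirical mean so far is exploited.
    Returns the per-arm pull counts.
    """
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K    # number of pulls per arm
    totals = [0.0] * K  # cumulative reward per arm
    for _ in range(N):
        if rng.random() < epsilon:
            arm = rng.randrange(K)  # explore a random arm
        else:
            # Exploit the best empirical mean; unpulled arms rank first.
            arm = max(range(K), key=lambda a: totals[a] / counts[a]
                      if counts[a] else float("inf"))
        reward = 1.0 if rng.random() < means[arm] else 0.0  # Bernoulli draw
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Illustrative instance: three arms, budget N = 5000, epsilon = 0.1.
pulls = epsilon_greedy([0.2, 0.5, 0.9], N=5000, epsilon=0.1)
```

For small ε, almost all of the budget N concentrates on the best arm, while ε = 1 degenerates to uniform random play; this sensitivity is why the concrete tuned values matter for reproducibility.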