P-SyncBB: A Privacy Preserving Branch and Bound DCOP Algorithm
Authors: Tal Grinshpoun, Tamir Tassa
JAIR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An extensive experimental evaluation on different benchmarks, problem sizes, and constraint densities shows that P-SyncBB exhibits superior performance to other privacy-preserving complete DCOP algorithms. We conduct extensive experimental evaluation on different benchmarks, problem sizes, and constraint densities, in order to compare the performance of P-SyncBB to the basic SyncBB (for assessing the price of privacy) and to the privacy-preserving complete DCOP algorithm, P-DPOP(+), of Léauté and Faltings (2013). |
| Researcher Affiliation | Academia | Tal Grinshpoun (EMAIL), Ariel University, Ariel, Israel; Tamir Tassa (EMAIL), The Open University, Ra'anana, Israel |
| Pseudocode | Yes | Algorithm 1: SyncBB (executed by agent Ak), procedure init [...] Algorithm 2: P-SyncBB (executed by agent Ak), first part, procedure init [...] Algorithm 2: P-SyncBB (executed by agent Ak), second part, procedure assign CPA |
| Open Source Code | No | No, the paper does not provide an explicit statement or link for the source code of the methodology described. While it mentions the use of 'Agent Zero simulator', it does not state that the implementation of P-Sync BB itself is publicly available. |
| Open Datasets | No | No, the paper describes how various problem instances and benchmarks were generated or constructed, such as 'unstructured random DCOPs', 'distributed 3-color graph coloring problems', 'scale-free networks...generated by the Barabási-Albert model', and 'distributed meeting scheduling problems...construct the problems similarly to the PEAV formulation'. It does not provide concrete access information (links, DOIs, specific repositories, or formal citations for publicly available datasets) for direct download or use of specific datasets. |
| Dataset Splits | No | No, the paper states that results are averaged 'over 50 problem instances (for each setting/benchmark)' but does not provide specific details on training, validation, or test splits for any dataset used. |
| Hardware Specification | Yes | The experiments were run on hardware comprising an Intel i7-4600U processor and 16GB memory. |
| Software Dependencies | No | No, the paper mentions that 'All the evaluated algorithms were implemented in the Agent Zero simulator (Lutati, Gontmakher, Lando, Netzer, Meisels, & Grubshtein, 2014)' but does not provide specific version numbers for the simulator itself or any other key software components, such as programming languages or libraries for multiple-precision integers. |
| Experiment Setup | Yes | For the random DCOPs benchmark this value was n = 6, while for the graph coloring benchmark it was n = 9. (We note that P-DPOP(+) crashed due to lack of memory on some graph coloring instances with n = 9 and p1 = 0.9, so these instances were excluded from the experiment.) [...] We set the value of S (the size of the additive group Z_S := [0, S-1] in which we perform our secure protocols, see Section 4.1) to S = 2^256. Such a setting mandates the usage of multiple-precision integers. [...] We report results below a cutoff time of 15 minutes. Namely, if in a given problem setting the run over a sequence of 50 problem instances did not complete in 12.5 hours, we do not report any result for that setting (with the exception of the P-DPOP(+) results in Figure 2, which enables a comparison over the full range of constraint densities in that experiment). A result for a given problem setting is omitted also in cases where the experiment ran out of the allocated 8GB heap memory. |
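The pseudocode row above quotes SyncBB, which is a distributed, synchronous branch-and-bound search over agent assignments. Purely as an illustration of the underlying branch-and-bound pruning logic, here is a minimal centralised Python sketch; the function names and the toy cost function are ours, not the paper's, and this omits the message-passing (CPA) structure entirely:

```python
# Centralised branch-and-bound sketch (illustrative stand-in for SyncBB's
# search logic; the real algorithm distributes this across agents).
# Assumes `cost` is monotone: the cost of a partial assignment never
# exceeds the cost of any of its extensions, so pruning is sound.

def branch_and_bound(domains, cost, assignment=None, best=(float("inf"), None)):
    """domains: list of per-variable value lists; cost maps a (partial)
    assignment to its accumulated constraint cost. Returns (cost, assignment)."""
    if assignment is None:
        assignment = []
    if len(assignment) == len(domains):          # full assignment reached
        c = cost(assignment)
        return (c, list(assignment)) if c < best[0] else best
    for value in domains[len(assignment)]:
        assignment.append(value)
        if cost(assignment) < best[0]:           # prune: cannot beat the bound
            best = branch_and_bound(domains, cost, assignment, best)
        assignment.pop()
    return best

# Toy example: 3 variables with domain {0, 1}; each equal adjacent pair costs 1.
doms = [[0, 1]] * 3
def chain_cost(a):
    return sum(1 for i in range(len(a) - 1) if a[i] == a[i + 1])

print(branch_and_bound(doms, chain_cost))  # → (0, [0, 1, 0])
```

The toy cost function mimics graph coloring on a chain, one of the benchmark families mentioned in the Open Datasets row.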
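The setup row notes that choosing S = 2^256 for the additive group Z_S mandates multiple-precision integers. As a rough illustration of why (values are split into random group elements far larger than any machine word), here is a generic additive secret-sharing sketch modulo S. This is not the paper's actual secure protocol from Section 4.1; the function names are hypothetical, and Python's built-in arbitrary-precision integers stand in for a multiple-precision library:

```python
import secrets

# Additive group Z_S with S = 2**256, matching the paper's parameter choice.
# Generic additive secret sharing over Z_S, for illustration only.
S = 2 ** 256

def share(value, n_agents):
    """Split `value` into n_agents random shares that sum to it modulo S."""
    shares = [secrets.randbelow(S) for _ in range(n_agents - 1)]
    shares.append((value - sum(shares)) % S)   # last share fixes the total
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo S."""
    return sum(shares) % S

cost = 1234
pieces = share(cost, 3)
assert reconstruct(pieces) == cost             # shares recombine exactly
```

Each individual share is uniformly random in [0, S-1], so no proper subset of agents learns anything about the shared value; that property is what makes the 256-bit group size, and hence multiple-precision arithmetic, necessary.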