Improving Simulated Annealing for Clique Partitioning Problems

Authors: Jian Gao, Yiqi Lv, Minghao Liu, Shaowei Cai, Feifei Ma

JAIR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on benchmark instances of the CPP were carried out, and the results suggest that the proposed simulated annealing algorithm outperforms all the existing heuristic algorithms, including five state-of-the-art algorithms.
Researcher Affiliation | Academia | Jian Gao (EMAIL), College of Information Science and Technology, Dalian Maritime University, Dalian, China; Yiqi Lv (EMAIL); Minghao Liu (EMAIL); Shaowei Cai (corresponding author, EMAIL), State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Beijing, China; Feifei Ma (corresponding author, EMAIL), Laboratory of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Beijing, China
Pseudocode | Yes | Algorithm 1: SA with timestamp-based configuration checking, SA-CC(C, Tinit); Algorithm 2: Function Descent_Search(C); Algorithm 3: The framework of the iterated local search algorithm SACC
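The pseudocode itself is not reproduced in this report, but the overall scheme it names can be sketched. Below is a minimal, hypothetical Python sketch of simulated annealing for clique partitioning (one-vertex relocation moves, Metropolis acceptance, geometric cooling). It deliberately omits the paper's timestamp-based configuration checking and descent search; the function name `sa_cpp` and all parameter defaults are illustrative, not the authors' SACC implementation.

```python
import math
import random

def sa_cpp(weights, t_init=10.0, alpha=0.98, steps_per_temp=100, t_min=1e-2, seed=0):
    """Hypothetical SA sketch for clique partitioning (not the paper's SACC).

    weights is a symmetric matrix of edge weights (diagonal 0); the objective
    is the total weight of edges whose endpoints share a group, maximized.
    """
    n = len(weights)
    rng = random.Random(seed)
    part = list(range(n))  # start from singleton groups: part[v] = group label

    def delta(v, g):
        # objective change if vertex v is relocated to group g
        old = part[v]
        return sum(weights[v][u] * ((part[u] == g) - (part[u] == old))
                   for u in range(n) if u != v)

    obj = 0.0  # the all-singleton partition has objective 0
    t = t_init
    while t > t_min:  # geometric cooling schedule
        for _ in range(steps_per_temp):
            v = rng.randrange(n)
            g = rng.randrange(n)  # an unused label acts as a new singleton group
            if g == part[v]:
                continue
            d = delta(v, g)
            # Metropolis rule: accept improvements, and worsenings with prob e^(d/t)
            if d > 0 or rng.random() < math.exp(d / t):
                part[v] = g
                obj += d
        t *= alpha
    return part, obj
```

On a toy instance with two strongly positive pairs and negative cross edges, the sketch recovers the two-clique partition; the paper's configuration checking would additionally forbid re-moving a vertex whose neighborhood has not changed since its last move.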
Open Source Code | No | The C++ source code of the comparative algorithms (ITS, CPP-P3, TMLS-SA and MDMCP) was provided by their authors; those algorithms were compiled with the same compiler and the same options. The SGVNS algorithm was provided as a binary file by its authors. The paper points to source code for the comparative algorithms, but not for the method it describes.
Open Datasets | Yes | All instances can be downloaded from the webpages https://github.com/hellozhilu/MDMCP and https://leria-info.univ-angers.fr/~jinkao.hao/cpp.html.
Dataset Splits | No | The paper reports experiments on benchmark instances and gives cut-off times for different instance sizes, but does not describe any training/validation/test splits. Running each algorithm 20 times independently per instance is repetition for statistical reliability, not a dataset split.
Hardware Specification | Yes | We perform all experiments in parallel on a workstation with two AMD EPYC 7763 CPUs (2.45GHz and 128 processors) and 1024GB RAM, running Ubuntu 20.04.4 LTS.
Software Dependencies | Yes | We implemented our algorithm in the C++ language and compiled it with gcc 9.4.0 under the -O3 option.
Experiment Setup | Yes | There is a parameter that controls the speed of convergence in our algorithm; we set it to αtemp = 0.98. Moreover, in the simulated annealing part of our algorithm we use the same parameter values as MDMCP does. We set all algorithms to the same cut-off times as those used in Lu et al. (2021). The cut-off times depend on the size of the instances: the more vertices an instance has, the more search time it is given. The detailed cut-off values are listed in Table 1.
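The convergence-speed parameter αtemp = 0.98 corresponds to geometric cooling: the temperature is multiplied by 0.98 after each temperature level, so a higher value means slower cooling and a longer search. As a minimal illustration (the function name and signature are hypothetical, not from the paper), the number of cooling levels spent between an initial and a minimal temperature can be counted as:

```python
ALPHA_TEMP = 0.98  # convergence-speed parameter reported in the paper

def cooling_levels(t_init, t_min, alpha=ALPHA_TEMP):
    # count geometric cooling steps until the temperature drops below t_min
    k = 0
    t = t_init
    while t >= t_min:
        t *= alpha
        k += 1
    return k
```

For example, cooling from 100.0 down past 1.0 with αtemp = 0.98 takes 228 levels, which is why the cut-off times in Table 1 grow with instance size: larger instances need more time per level, not more levels.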