Unsupervised Learning for Combinatorial Optimization with Principled Objective Relaxation

Authors: Haoyu Peter Wang, Nan Wu, Hang Yang, Cong Hao, Pan Li

NeurIPS 2022

Reproducibility assessment — each variable is listed with its result and the supporting LLM response:

Research Type: Experimental. "We evaluate our framework by solving a synthetic graph optimization problem, and two real-world applications including resource allocation in circuit design and approximate computing. Our framework largely outperforms the baselines based on naïve relaxation, reinforcement learning, and Gumbel-softmax tricks."
Researcher Affiliation: Academia. Haoyu Wang (Purdue University), Nan Wu (UCSB), Hang Yang (Georgia Tech), Cong Hao (Georgia Tech), Pan Li (Purdue University).

Pseudocode: No. The paper describes its procedures in text (e.g., the rounding procedure in Definition 1), but no explicitly labeled pseudocode or algorithm blocks are present.
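Since the rounding procedure is described only in prose, a minimal sketch may help. The following is a hypothetical reconstruction, assuming Definition 1 greedily fixes one relaxed variable at a time to whichever binary value yields the lower relaxed objective (which, for the entry-wise concave relaxations the paper advocates, should not increase the objective); `sequential_rounding` and `relaxed_objective` are illustrative names, not the authors' code.

```python
import numpy as np

def sequential_rounding(x_soft, relaxed_objective):
    """Sketch of a sequential rounding step in the spirit of Definition 1.

    x_soft: array in [0, 1]^n, the soft assignment produced by the network.
    relaxed_objective: callable mapping an assignment vector to a scalar;
        assumed entry-wise concave, so this greedy rounding never increases it.
    """
    x = x_soft.astype(float).copy()
    for i in range(len(x)):
        # Try fixing entry i to 0 and to 1, keeping the other entries as-is.
        x[i] = 0.0
        f0 = relaxed_objective(x)
        x[i] = 1.0
        f1 = relaxed_objective(x)
        # Keep whichever binary value yields the smaller relaxed objective.
        x[i] = 0.0 if f0 <= f1 else 1.0
    return x.astype(int)
```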
Open Source Code: Yes. "Our code and the datasets are available at: https://github.com/Graph-COM/CO_ProxyDesign"

Open Datasets: Yes. "Our code and the datasets are available at: https://github.com/Graph-COM/CO_ProxyDesign"

Dataset Splits: Yes. "We follow the settings in [1] and use a 6:2:2 dataset split for training, validating and testing."
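As a concrete, if generic, illustration of such a split: the sketch below produces 60/20/20 index sets. The shuffling and seed are assumptions, and `split_622` is a hypothetical helper, not taken from the repository.

```python
import numpy as np

def split_622(num_samples, seed=0):
    """Return index arrays for a 60/20/20 train/val/test split."""
    rng = np.random.default_rng(seed)  # seed choice is an assumption
    idx = rng.permutation(num_samples)
    n_train, n_val = int(0.6 * num_samples), int(0.2 * num_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_622(1000)
```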
Hardware Specification: Yes. "To be fair, all methods run on the same server with a Quadro RTX 6000 GPU."

Software Dependencies: No. The paper mentions software such as PyTorch [76], PyTorch Geometric [77], DGL [66], and the Adam optimizer [78], but does not provide specific version numbers for any of these dependencies.
Experiment Setup: Yes. "The Adam optimizer [78] is adopted with a learning rate of 1e-4 and the batch size of 256. For RL training, we use a constant learning rate of 1e-5. We train both the proxies and the Aθ module for 200 epochs."
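The quoted hyperparameters map onto a standard PyTorch training loop along these lines. This is a minimal sketch: only Adam, the 1e-4 learning rate, the batch size of 256, and the 200 epochs come from the paper; the model, data, and loss below are stand-ins.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model, data, and loss; only the hyperparameters are from the paper.
model = nn.Linear(16, 1)
train_set = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
loss_fn = nn.MSELoss()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # quoted: Adam, lr 1e-4
loader = DataLoader(train_set, batch_size=256, shuffle=True)  # quoted: batch size 256

for epoch in range(200):  # quoted: 200 epochs (the 1e-5 rate applies to RL training)
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```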