Densest k-Subgraph Mining via a Provably Tight Relaxation
Authors: Qiheng Lu, Nicholas D Sidiropoulos, Aritra Konar
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental Results: In this section, we first compare different choices of the diagonal loading parameter λ and various step sizes for the Frank-Wolfe algorithm. Subsequently, we compare the proposed algorithms with existing algorithms to illustrate the effectiveness of our approaches. Datasets: We pre-processed the datasets by converting directed graphs to undirected graphs and removing self-loops. A summary of the datasets used in this paper is provided in Table 1. All datasets were downloaded from (Leskovec and Krevl 2014), and the sizes of the largest cliques were obtained from (Jursa 2016; Jain and Seshadhri 2020). |
| Researcher Affiliation | Academia | 1: University of Virginia, 2: KU Leuven. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Frank-Wolfe for problem (5). Input: the adjacency matrix A, the subgraph size k, and the diagonal loading parameter λ. Output: a solution of the optimization problem (5). 1: L ← A + λI; 2: while the convergence criterion is not met do 3: g_t ← (A + λI)x_t; 4: s_t ← supp(top_k(g_t)); 5: d_t ← s_t − x_t; 6: Option I: γ_t ← min{1, (g_t)^T d_t / …}; 7: Option II: γ_t ← min{1, (g_t)^T d_t / …}; 8: x_{t+1} ← x_t + γ_t d_t; |
| Open Source Code | Yes | Code: https://github.com/luqh357/DkS-Diagonal-Loading |
| Open Datasets | Yes | All datasets were downloaded from (Leskovec and Krevl 2014), and the sizes of the largest cliques were obtained from (Jursa 2016; Jain and Seshadhri 2020). |
| Dataset Splits | No | The paper uses various real-world datasets for experiments but does not provide specific training/test/validation splits for these datasets. The experimental setup involves running algorithms on these graphs to find dense subgraphs, not training a model with data partitioning. |
| Hardware Specification | Yes | all the experiments were conducted on a workstation with 256GB RAM and an AMD Ryzen Threadripper PRO 5975WX processor. |
| Software Dependencies | No | In this paper, all the code is implemented in Python, and all the experiments were conducted on a workstation with 256GB RAM and an AMD Ryzen Threadripper PRO 5975WX processor. We use the built-in AdamW optimizer in PyTorch (Paszke et al. 2019) to solve the unconstrained optimization problem after parameterization. While Python and PyTorch are mentioned, specific version numbers for these software components are not provided. |
| Experiment Setup | Yes | The initialization of the Frank-Wolfe algorithm is chosen to be x_i = k/n, ∀i ∈ [n], and the initialization of AdamW is chosen to be θ_i = 0, ∀i ∈ [n]. Unless specifically stated, the diagonal loading parameter λ = 1 and the Frank-Wolfe algorithm uses Option I in Algorithm 1 for the step size. AdamW uses a learning rate of 3. The maximum number of iterations for both the Frank-Wolfe algorithm and AdamW is set to 200. |
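The Frank-Wolfe iteration quoted in the Pseudocode row can be sketched in a few lines of NumPy. Note the caveats: the denominators of the paper's Option I/II step-size rules are truncated in the extracted text, so this sketch substitutes the standard Frank-Wolfe step size γ_t = 2/(t+2) as a stand-in; the function name and the planted-clique example below are illustrative, not the authors' code. The initialization x_i = k/n and the defaults λ = 1, 200 iterations follow the Experiment Setup row.

```python
import numpy as np

def frank_wolfe_dks(A, k, lam=1.0, max_iter=200):
    """Sketch of Algorithm 1: Frank-Wolfe for the diagonally loaded
    DkS relaxation, maximizing x^T (A + lam*I) x over the polytope
    {0 <= x <= 1, sum(x) = k}.

    Step size: gamma_t = 2/(t+2) as a hedged stand-in for the paper's
    Option I/II rules, which are truncated in the extracted pseudocode.
    """
    n = A.shape[0]
    L = A + lam * np.eye(n)          # line 1: diagonal loading
    x = np.full(n, k / n)            # initialization x_i = k/n
    for t in range(max_iter):
        g = L @ x                    # line 3: gradient direction
        s = np.zeros(n)              # line 4: LMO = indicator of the
        s[np.argsort(g)[-k:]] = 1.0  #   top-k entries of the gradient
        d = s - x                    # line 5: FW direction
        gamma = 2.0 / (t + 2)        # stand-in step size (see docstring)
        x = x + gamma * d            # line 8: update, stays feasible
    return x
```

On a toy graph with a planted 4-clique among 8 nodes, the iterate concentrates on the clique vertices, which is the qualitative behavior the paper's experiments rely on.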