Fitting Networks with a Cancellation Trick
Authors: Jiashun Jin, Jingming Wang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our numerical study suggests that R-SCORE significantly improves over existing spectral approaches in many cases. Also, theoretically, we show that the Hamming error rate of R-SCORE is faster than that of SCORE in a specific sparse region, and is at least as fast outside this region. ... 4 SIMULATION RESULTS We compare R-SCORE with SCORE and a non-convex penalization MLE-based approach by (Ma et al., 2020), which we refer to as np MLE. Our study contains 3 experiments. |
| Researcher Affiliation | Academia | Jiashun Jin, Department of Statistics, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Jingming Wang, Department of Statistics, University of Virginia, Charlottesville, VA 22903, USA |
| Pseudocode | Yes | Algorithm 1: The Recursive SCORE (R-SCORE). Input: A and K. Initialize with an estimate Π̂ by SCORE. For m = 1, 2, . . . , M: (Refitting) update N̂ using A, the Π̂ from the most recent step, and the refitting step below; (SCORE) update Π̂ by applying SCORE to A − N̂ with the most recent N̂. Output: Π̂ = [π̂_1, . . . , π̂_n]. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code, a link to a repository, or mention of code in supplementary materials. |
| Open Datasets | No | The networks are simulated as follows: fixing (n, K), we first simulate an n × n matrix Ω as in the logit-DCBM model as follows. ... Once we have such a matrix Ω, we use it to generate a binary adjacency matrix A. |
| Dataset Splits | No | The paper describes how synthetic networks are simulated from a model, rather than using or splitting a pre-existing dataset. It details the simulation parameters (n, K, F, bn, β) to generate a single network instance for each experiment, but does not involve explicit training/test/validation splits of a larger dataset. |
| Hardware Specification | No | The paper does not specify any hardware details (CPU, GPU, memory, etc.) used for running the simulations or experiments. |
| Software Dependencies | No | The paper mentions 'some popular Python packages, such as scikit-learn' in the introduction as a general reference for logit-link functions, but does not specify any software or library versions used for their implementation or experiments. |
| Experiment Setup | Yes | In Experiment 1, we compare R-SCORE with SCORE (which is viewed as a benchmark). In such settings, approximately, the Signal-to-Noise Ratio (SNR) is b_n(1 − β) (e.g., see Jin et al. (2021a)). It is desirable to choose settings where the SNR is neither too large nor too small. Consider four settings (A), (B), (C) and (D). In Setting (A), we fix (n, K) = (2400, 3) and F = Uniform(0.01, 2). We choose b_n = 60 and β = 23/30 (and this way, SNR = 14). ... In Experiment 2, we study how the error rates of R-SCORE and np MLE change across different iterations (both algorithms are recursive). Fix (n, K) = (5400, 6). Let Π be generated similarly to Experiment 1 except that π_i = e_k for n_k different i. ... In this experiment, we choose (n_1, n_2, . . . , n_K) = 200 · (5, 1.5, 6, 3, 7.5, 4), F = Uniform(0.01, 2), b_n = 80, and (β_1, β_2) = (0.9, 0.6). ... For each m = 1, 2, . . . , 1000, we apply R-SCORE and np MLE with m iterations. |
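The simulated-network design and the SCORE step quoted in the table can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the K × K mixing matrix `P` and its 0.1 scale are invented here (the paper's logit-DCBM controls sparsity through b_n and β instead), and the community sizes are scaled down from Experiment 2's 200 · (5, 1.5, 6, 3, 7.5, 4) so the example runs quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Community sizes: Experiment 2 uses 200 * (5, 1.5, 6, 3, 7.5, 4) for n = 5400;
# scaled down here (factor 20 instead of 200) for a quick run.
sizes = (20 * np.array([5, 1.5, 6, 3, 7.5, 4])).astype(int)
n, K = int(sizes.sum()), len(sizes)        # n = 540, K = 6

# Membership matrix Pi: pi_i = e_k for n_k different indices i.
labels = np.repeat(np.arange(K), sizes)
Pi = np.eye(K)[labels]                     # n x K one-hot rows

# Degree heterogeneity parameters drawn from F = Uniform(0.01, 2).
theta = rng.uniform(0.01, 2.0, size=n)

# Assumed K x K mixing matrix (hypothetical; not the paper's construction).
P = 0.1 * (np.full((K, K), 0.3) + 0.7 * np.eye(K))

# DCBM-style probability matrix Omega, clipped into [0, 1].
Omega = np.clip(theta[:, None] * theta[None, :] * (Pi @ P @ Pi.T), 0.0, 1.0)

# Generate a symmetric binary adjacency matrix A with zero diagonal.
U = rng.random((n, n))
A = np.triu((U < Omega).astype(int), k=1)
A = A + A.T

# SCORE embedding (Jin, 2015): entrywise ratios of the leading eigenvectors.
vals, vecs = np.linalg.eigh(A)
order = np.argsort(-np.abs(vals))[:K]      # K largest eigenvalues in magnitude
xi = vecs[:, order]
# Ratios cancel the degree parameters theta_i; isolated low-degree nodes can
# yield 0/0 = nan entries, which a real implementation would regularize.
R = xi[:, 1:] / xi[:, [0]]                 # n x (K-1) ratio matrix
```

Clustering the rows of `R` (e.g., with k-means) gives SCORE's community estimate; R-SCORE's refitting loop, as described in the pseudocode row above, would then alternate between updating N̂ and re-running SCORE on A − N̂.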