Trajectory-Dependent Generalization Bounds for Pairwise Learning with φ-mixing Samples

Authors: Liyuan Liu, Hong Chen, Weifu Li, Tieliang Gong, Hao Deng, Yulong Wang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct several numerical experiments to investigate the relationship between the Minkowski dimension and the generalization bound. We then validate our theoretical findings regarding the convergence properties through these experiments.
Researcher Affiliation | Academia | ¹College of Informatics, Huazhong Agricultural University, Wuhan 430070, China; ²Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, Wuhan 430070, China; ³School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
Pseudocode | No | The paper describes theoretical frameworks and experimental procedures narratively, but it does not contain any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about making the source code available, nor does it include a link to a code repository.
Open Datasets | No | Our experimental design is similar to Birdal et al. [2021] and Dupuis et al. [2023]. Given a fixed positive integer $m_0$, we generate the random series $\{e_i\}_{i \geq 1}$, which are i.i.d. drawn from the Gaussian distribution $N(\mathbf{0}_p, I_{p \times p})$. For any $i \geq 1$, let $x_i = (x_i^1, \ldots, x_i^p) = \sum_{j=0}^{m_0} e_{i+j} \in \mathbb{R}^p$; then the sequence $\{x_i\}_{i \geq 1}$ is an $m_0$-dependent process and hence φ-mixing [Peng et al., 2023]. The experiments rely on this synthetically generated data; no publicly available dataset is used or linked.
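The data-generation procedure quoted above is simple enough to sketch directly. The snippet below is an illustrative reconstruction (the function name, seeding, and pure-Python style are our choices, not the authors' code): each $x_i$ sums a sliding window of $m_0 + 1$ i.i.d. Gaussian noise vectors, so points more than $m_0$ indices apart share no noise terms and are independent.

```python
import random

def generate_phi_mixing_data(n, p, m0, seed=0):
    """Build an m0-dependent (hence phi-mixing) sequence by summing a
    sliding window of i.i.d. standard Gaussian noise vectors:
    x_i = e_i + e_{i+1} + ... + e_{i+m0}.
    """
    rng = random.Random(seed)
    # n + m0 noise vectors e_1, ..., e_{n+m0} ~ N(0_p, I_{p x p})
    e = [[rng.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n + m0)]
    # x_i depends only on e_i, ..., e_{i+m0}, so x_i and x_{i'} are
    # built from disjoint noise whenever |i - i'| > m0.
    return [
        [sum(e[i + j][k] for j in range(m0 + 1)) for k in range(p)]
        for i in range(n)
    ]
```

Each coordinate of $x_i$ then has variance $m_0 + 1$, and the dependence range is controlled entirely by $m_0$.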
Dataset Splits | No | Given the dataset $Z = \{z_1, \ldots, z_n\}$, we employ the SGD algorithm to train three fully connected networks (FCN) with different numbers of layers under different parameter settings (batch size: 24, 48, 64; learning rate: 0.1, 0.01, 0.001; sample size n: 225, 900, 1406, 2025, 2756, 4556, 5625, 6806). The paper mentions sample sizes but does not specify how these samples are split into training, validation, or test sets, nor does it describe a cross-validation setup.
Hardware Specification | No | The paper describes the experimental setup, including network architectures, batch sizes, learning rates, and sample sizes, but does not specify any hardware details such as the GPU or CPU models used to run the experiments.
Software Dependencies | No | Indeed, the Minkowski dimension can often be calculated via the persistent homology (PH) dimension $\dim_{PH_0}(W_{Z,U})$, which can in turn be approximated numerically with the PH software provided by Pérez et al. [2021]. While this third-party software is mentioned, the paper does not provide version numbers for any software libraries or programming languages used in the authors' own implementation.
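For intuition, a $PH_0$-based dimension estimate can be approximated without any specialized PH library: for weight $\alpha = 1$, the sum of the finite $PH_0$ persistence interval lengths of a point cloud equals the total edge length of its Euclidean minimum spanning tree, and the dimension is read off from how that length scales with subsample size ($E(n) \sim n^{\beta}$ gives $\dim \approx 1/(1-\beta)$, as in Birdal et al. [2021]). The sketch below is our own illustration under those assumptions, not the Pérez et al. [2021] software:

```python
import math
import random

def mst_total_length(points):
    """Total edge length of the Euclidean minimum spanning tree (Prim's
    algorithm). For alpha = 1 this equals the sum of the finite PH0
    persistence interval lengths of the Vietoris-Rips filtration."""
    n = len(points)
    in_tree = [False] * n
    dist = [math.inf] * n
    dist[0] = 0.0  # start the tree at the first point
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < dist[v]:
                    dist[v] = d
    return total

def ph0_dimension_estimate(points, sizes, seed=0):
    """Estimate the PH0 dimension from the least-squares slope beta of
    log E(n) vs. log n, where E(n) is the MST length of a random
    n-point subsample; return 1 / (1 - beta)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for n in sizes:
        sub = rng.sample(points, n)
        xs.append(math.log(n))
        ys.append(math.log(mst_total_length(sub)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    beta = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
    return 1.0 / (1.0 - beta)
```

On points drawn uniformly from the unit square, the MST length grows like $n^{1/2}$, so the estimate should come out near 2; production code would instead use an optimized PH or MST routine, since this Prim's implementation is $O(n^2)$.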
Experiment Setup | Yes | Given the dataset $Z = \{z_1, \ldots, z_n\}$, we employ the SGD algorithm to train three fully connected networks (FCN) with different numbers of layers under different parameter settings (batch size: 24, 48, 64; learning rate: 0.1, 0.01, 0.001; sample size n: 225, 900, 1406, 2025, 2756, 4556, 5625, 6806). During training, the softmax activation function is employed.
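The reported hyperparameter grid is concrete enough to enumerate. The sketch below assumes a full Cartesian product of the listed settings (3 batch sizes × 3 learning rates × 8 sample sizes = 72 runs), which the paper does not state explicitly; the training loop itself is not described, so only the configuration sweep is shown:

```python
from itertools import product

# Settings quoted in the paper's experiment setup.
BATCH_SIZES = [24, 48, 64]
LEARNING_RATES = [0.1, 0.01, 0.001]
SAMPLE_SIZES = [225, 900, 1406, 2025, 2756, 4556, 5625, 6806]

def enumerate_configs():
    """Yield every (batch size, learning rate, sample size) combination,
    assuming the paper sweeps the full grid."""
    for bs, lr, n in product(BATCH_SIZES, LEARNING_RATES, SAMPLE_SIZES):
        yield {"batch_size": bs, "learning_rate": lr, "sample_size": n}
```

Each yielded dict would parameterize one SGD training run of an FCN; repeating the sweep for the three network depths would triple the run count.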