Towards Scalable Topological Regularizers

Authors: Wong Hiu-Tung, Darrick Lee, Hong Yan

ICLR 2025

Reproducibility Assessment

Variable | Result | LLM Response
Research Type | Experimental | We provide empirical experiments which demonstrate the efficacy of PPM-Reg as a topological regularizer. First, in Section 6.2, we provide an expository shape-matching experiment to illustrate the behavior of PPM-Reg, and provide computational comparisons. Next, in Section 6.3, we apply PPM-Reg to a GAN-based generative modelling problem, consistently improving the generative quality of GANs. Finally, in Section 6.4, we consider a GAN-based semi-supervised learning problem, which demonstrates the effectiveness of PPM-Reg in improving the discriminative ability of GANs.
Researcher Affiliation | Collaboration | Hiu-Tung Wong1, Darrick Lee2, Hong Yan1,3. 1Centre for Intelligent Multidimensional Data Analysis, Science and Technology Park, Hong Kong; 2School of Mathematics, University of Edinburgh, UK; 3Department of Electrical Engineering, City University of Hong Kong, Kowloon, Hong Kong. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methods and computations using mathematical equations and textual explanations, for example, 'PH_q(S) = {(t_b, t_d − t_b)}, t_b := max_{x∈S} d(x, x^(2)), t_d := min_{x∈S} d(x, x^(1))' in Section 3. However, it does not include any clearly labeled pseudocode or algorithm blocks.
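Read literally, the quoted expression pairs a feature as (birth, persistence). Assuming x^(1) and x^(2) denote the nearest and second-nearest points to x within S (our reading of this excerpt, not stated here), the two quantities can be sketched in a few lines of plain Python; the helper names `nn_dists` and `ph_pair` are hypothetical, not from the paper:

```python
import math

def nn_dists(S):
    """For each point in S, distances to its 1st- and 2nd-nearest other points."""
    out = []
    for i, x in enumerate(S):
        d = sorted(math.dist(x, y) for j, y in enumerate(S) if j != i)
        out.append((d[0], d[1]))  # (d(x, x^(1)), d(x, x^(2)))
    return out

def ph_pair(S):
    """(t_b, t_d - t_b) per the quoted definition, under our reading."""
    ds = nn_dists(S)
    t_b = max(d2 for _, d2 in ds)  # t_b := max_x d(x, x^(2))
    t_d = min(d1 for d1, _ in ds)  # t_d := min_x d(x, x^(1))
    return t_b, t_d - t_b
```

For a unit right triangle S = [(0,0), (1,0), (0,1)], `ph_pair` returns t_b = √2 (the largest second-nearest-neighbor distance) paired with persistence 1 − √2.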
Open Source Code | Yes | Code & supp.: https://github.com/htwong-ai/Scalable_Topological_Regularizers
Open Datasets | Yes | We consider the CelebA (Liu et al., 2015) and Anime Face (Churchill & Chao, 2019) datasets. We compare the SSL performance with Fashion-MNIST (Xiao, 2017), Kuzushiji-MNIST (Clanuwat et al., 2018), and MNIST. We add a new dataset, LSUN Kitchen (Yu et al., 2015), and also use CelebA (Liu et al., 2015) at a higher resolution. We compare the SSL performance with the SVHN dataset.
Dataset Splits | Yes | In these experiments, 200 and 400 labels are randomly sampled from the dataset. For example, compared with the Baseline, Kuzushiji-MNIST gains a 27.38% improvement with 200 labels. In this experiment, 400 and 600 labels are randomly sampled from the dataset. ... SVHN contains 72,657 training samples, and 400 and 600 labels constitute only 0.55% and 0.82% of the original dataset, respectively.
Hardware Specification | Yes | Results are computed on an Nvidia GeForce RTX 3060 with an Intel Core i7-10700.
Software Dependencies | No | The paper mentions the "torch-topological" package and a "pure PyTorch implementation of PPM-Reg" but does not specify version numbers for these components or for any other libraries used.
Experiment Setup | Yes | Throughout the experiment, we use gradient descent with momentum as the optimization algorithm. The momentum parameter is 0.9 and the step size is 0.05. ... Throughout the experiment, the hyperparameters of PPM-Reg are fixed as λ = 1, λ0 = 1, λ1 = 6000, σ = 0.1, and s = 2000. For Cramer + PPM-Reg, the weight of the Cramer loss is 1.6. For MMD + PPM-Reg, the weight of the MMD loss is 5. For the addition of PPM-Reg, we fix λ0 = 0.001, λ1 = 0.6, and s = 1024; λ ∈ {1.0, 5.0, 10.0} and σ ∈ {0.05, 0.1, 0.5} are the tuning parameters. In both cases, the standard Adam optimizer with learning rate 1 × 10⁻⁴ is used to train the network. For gω, β1 = 0.0 and β2 = 0.99. For dθ, β1 = 0.5 and β2 = 0.99. The batch size is 192.
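The first-stage optimizer quoted above (momentum 0.9, step size 0.05) can be illustrated with a plain heavy-ball update. This is a minimal sketch of one common momentum convention, not the authors' implementation; the function name `sgd_momentum_step` is our own:

```python
def sgd_momentum_step(params, grads, velocity, lr=0.05, momentum=0.9):
    """One heavy-ball update with the quoted settings (lr=0.05, momentum=0.9).

    Convention used here: v <- momentum * v - lr * g, then p <- p + v.
    """
    for i, g in enumerate(grads):
        velocity[i] = momentum * velocity[i] - lr * g
        params[i] += velocity[i]
    return params, velocity

# Usage: minimize f(p) = p^2 (gradient 2p) starting from p = 1.0.
params, vel = [1.0], [0.0]
for _ in range(200):
    grads = [2 * params[0]]
    params, vel = sgd_momentum_step(params, grads, vel)
```

With curvature 2 and these settings, the iteration is stable (spectral radius ≈ 0.95), so the parameter spirals toward the minimum at 0 over the 200 steps.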