Harmonizing Geometry and Uncertainty: Diffusion with Hyperspheres

Authors: Muskan Dosi, Chiranjeev Chiranjeev, Kartik Thakral, Mayank Vatsa, Richa Singh

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate HyperSphereDiff on four object datasets and two face datasets, showing that incorporating angular uncertainty better preserves the underlying hyperspherical manifold. Resources are available at: Link. (...) 5. Experiments: We evaluate HyperSphereDiff on four object datasets (CIFAR-10 (Krizhevsky et al., 2009), MNIST (Deng, 2012), CUB-200 (Wah et al., 2011), and Cars-196 (Krause et al., 2013)) and two face datasets (CelebA (Liu et al., 2015) and D-LORD (Manchanda et al., 2023)). (...) Table 1. Performance comparison of Gaussian and HyperSphereDiff across six datasets using FID (lower is better), HCR (lower is better), and HDS (lower values indicate harder samples). The vMF model demonstrates superior capability in generating challenging samples with better FID and HDS scores. (...) Ablation Study: Table 2 presents a comparison of FID scores on the CelebA and D-LORD datasets under three noise strategies: Gaussian, Spherical (HyperSphereDiff), and a hybrid of both.
Researcher Affiliation | Academia | Department of Computer Science and Engineering, Indian Institute of Technology Jodhpur, India. Correspondence to: Muskan Dosi <EMAIL>.
Pseudocode | Yes | Algorithm 1 HyperSphereDiff Training: vMF Diffusion with Hypercone Preservation (...) Algorithm 2 HyperSphereDiff Testing: Sampling from vMF Diffusion with Class Guidance (...) Algorithm 3 Hypercone-Constrained Sampling with Learned Truncation
Open Source Code | No | Resources are available at: Link. -> The abstract mentions 'Link', which is not a specific URL or repository and is therefore ambiguous.
Open Datasets | Yes | We evaluate HyperSphereDiff on four object datasets (CIFAR-10 (Krizhevsky et al., 2009), MNIST (Deng, 2012), CUB-200 (Wah et al., 2011), and Cars-196 (Krause et al., 2013)) and two face datasets (CelebA (Liu et al., 2015) and D-LORD (Manchanda et al., 2023)).
Dataset Splits | Yes | We evaluate HyperSphereDiff on four object datasets (CIFAR-10 (Krizhevsky et al., 2009), MNIST (Deng, 2012), CUB-200 (Wah et al., 2011), and Cars-196 (Krause et al., 2013)) and two face datasets (CelebA (Liu et al., 2015) and D-LORD (Manchanda et al., 2023)). (...) Figure 6 illustrates the feature representations of conditional samples generated from the 10-class MNIST dataset (...) Figure 9 illustrates the feature representations of conditional samples generated from the 10 classes of the CIFAR-10 dataset.
Hardware Specification | Yes | For training, we use the Adam optimizer with a learning rate of 1e-4 and a batch size of 128, with the model trained for 100K iterations on a single NVIDIA A100 GPU.
Software Dependencies | No | For training, we use the Adam optimizer with a learning rate of 1e-4 and a batch size of 128, with the model trained for 100K iterations on a single NVIDIA A100 GPU. -> The paper mentions the Adam optimizer but does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup | Yes | For training, we use the Adam optimizer with a learning rate of 1e-4 and a batch size of 128, with the model trained for 100K iterations on a single NVIDIA A100 GPU.
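The ablation quoted above compares Gaussian, Spherical, and hybrid noise strategies. The paper does not reproduce its noise definitions here, so the following is only a generic NumPy sketch of what such strategies could look like: plain Gaussian noise, Gaussian noise projected onto the unit hypersphere, and a convex mix of the two (the function names and the `alpha` mixing weight are illustrative assumptions, not the paper's formulation).

```python
import numpy as np

def gaussian_noise(shape, rng):
    """Standard Gaussian noise, as in a vanilla diffusion model."""
    return rng.standard_normal(shape)

def spherical_noise(shape, rng):
    """Gaussian noise projected onto the unit hypersphere: a simple
    stand-in for directional (angular) noise on the manifold."""
    z = rng.standard_normal(shape)
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def hybrid_noise(shape, rng, alpha=0.5):
    """Convex mix of Gaussian and spherical noise (illustrative only)."""
    return alpha * gaussian_noise(shape, rng) + (1 - alpha) * spherical_noise(shape, rng)

rng = np.random.default_rng(0)
eps = spherical_noise((4, 128), rng)
print(np.allclose(np.linalg.norm(eps, axis=-1), 1.0))  # each row lies on the sphere
```

Normalizing a Gaussian draw gives a direction uniform on the sphere, which is why it serves as a reasonable toy contrast to unconstrained Gaussian noise.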
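The pseudocode rows reference vMF diffusion, but the algorithms themselves are not reproduced in this report. As background, here is a standard, generic von Mises-Fisher sampler (Wood's 1994 rejection scheme), not the paper's exact training or hypercone-constrained procedure; `mu` is the mean direction and `kappa` the concentration.

```python
import numpy as np

def sample_vmf(mu, kappa, n, rng):
    """Draw n samples from a von Mises-Fisher distribution on the unit
    sphere S^{d-1} via Wood's (1994) rejection sampler."""
    d = mu.shape[0]
    # Rejection-sample the component w of each point along mu.
    b = (-2 * kappa + np.sqrt(4 * kappa**2 + (d - 1) ** 2)) / (d - 1)
    x0 = (1 - b) / (1 + b)
    c = kappa * x0 + (d - 1) * np.log(1 - x0**2)
    ws = []
    while len(ws) < n:
        z = rng.beta((d - 1) / 2, (d - 1) / 2)
        w = (1 - (1 + b) * z) / (1 - (1 - b) * z)
        if kappa * w + (d - 1) * np.log(1 - x0 * w) - c >= np.log(rng.uniform()):
            ws.append(w)
    w = np.array(ws)
    # Uniform directions in the tangent space orthogonal to mu.
    v = rng.standard_normal((n, d))
    v -= np.outer(v @ mu, mu)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return w[:, None] * mu + np.sqrt(1 - w**2)[:, None] * v

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0, 1.0])
x = sample_vmf(mu, kappa=50.0, n=500, rng=rng)
print(np.allclose(np.linalg.norm(x, axis=1), 1.0))  # samples stay on the sphere
</inline>
```

Larger `kappa` concentrates samples in a tighter cap around `mu`, which is the basic mechanism a hypercone-style constraint would exploit.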
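The experiment setup quotes Adam with a learning rate of 1e-4. Since the report notes that no framework or version is specified, here is a minimal NumPy sketch of the Adam update rule (Kingma & Ba, 2015) using that learning rate; the toy quadratic objective is an assumption standing in for the actual diffusion training loss.

```python
import numpy as np

class Adam:
    """Minimal NumPy Adam optimizer using the quoted lr of 1e-4.
    The real experiments would use a deep-learning framework; this
    only sketches the update rule itself."""
    def __init__(self, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = None
        self.t = 0

    def step(self, params, grads):
        if self.m is None:
            self.m = np.zeros_like(params)
            self.v = np.zeros_like(params)
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grads      # first moment
        self.v = self.b2 * self.v + (1 - self.b2) * grads**2   # second moment
        m_hat = self.m / (1 - self.b1**self.t)                 # bias correction
        v_hat = self.v / (1 - self.b2**self.t)
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Toy objective ||theta||^2 (gradient 2*theta), 1000 of the paper's 100K steps.
theta = np.ones(4)
opt = Adam(lr=1e-4)
for _ in range(1000):
    theta = opt.step(theta, 2 * theta)
print(np.linalg.norm(theta) < 2.0)  # norm shrinks from its initial value of 2
```

With lr=1e-4 each step moves parameters by roughly the learning rate, which is why 100K iterations are plausible at this setting.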