Point-Level Topological Representation Learning on Point Clouds
Authors: Vincent Peter Grande, Michael T Schaub
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct experiments on real world and synthetic data, compare the clustering results with clustering by TPCC, other classical clustering algorithms, and other point features, and demonstrate the robustness of TOPF against noise. In Table 2, we perform an ablation study with respect to the harmonic projections of step 3 of TOPF. ... We introduce the topological clustering benchmark suite, the first benchmark for topological clustering. ... We introduce the topological clustering benchmark suite (Appendix D) and report running times and the accuracies of clustering based on TOPF and other methods and point embeddings, see Table 1. |
| Researcher Affiliation | Academia | 1Department of Computer Science, RWTH Aachen University, Germany. Correspondence to: Vincent P. Grande <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Topological Point Features (TOPF). Input: Point cloud X ⊆ ℝⁿ, maximum homology dimension d ∈ ℕ, interpolation coefficient λ. 1. Compute persistent homology with generators in dim. k ≤ d. (Sec. 2: Betti Numbers & Persistent Homology) 2. Select set of significant features (bᵢ, dᵢ, gᵢ) with birth, death, and generator in 𝔽₃ coordinates (see Step 2). 3. Embed gᵢ into real space (1), and project into harmonic subspace (2) of the SC at step dᵢ^λ bᵢ^(1−λ) or λdᵢ + (1−λ)bᵢ. 4. Normalise projections to eᵢᵏ and compute Fᵢᵏ(x) := avg_{x∈σ}(eᵢᵏ\|_{l(σ)}) for all points x ∈ X (3). Output: Features of x ∈ X |
| Open Source Code | Yes | Code We provide TOPF as an easy-to-use Python package with example notebooks at https://github.com/vincent-grande/topf, installable via pip. |
| Open Datasets | Yes | We introduce the topological clustering benchmark suite (Appendix D) and report running times and the accuracies of clustering based on TOPF and other methods and point embeddings, see Table 1. ... We used WSDesc pretrained on the 3DMatch data ... We use the data pretrained on the Shape Net Part segmentation dataset |
| Dataset Splits | No | The paper primarily evaluates clustering algorithms on its newly introduced topological clustering benchmark suite, reporting metrics like Adjusted Rand Index (ARI). While it mentions running algorithms multiple times (e.g., "We ran each algorithm 20 times") to ensure robustness, it does not specify explicit train/test/validation splits for its own method's evaluation or for the application of pretrained baseline models (WSDesc, DGCNN) on the benchmark suite. The evaluation focuses on how well the algorithms cluster existing ground truth divisions within the datasets, rather than a supervised learning setup with distinct data splits for training and testing. |
| Hardware Specification | Yes | All experiments for TOPF were run on an Apple M1 Pro chipset with 10 cores and 32 GB memory. |
| Software Dependencies | No | For persistent homology computations, we used GUDHI (The GUDHI Project, 2015) ... and Ripserer (Čufar, 2020) ... For the least-squares problems, we used the LSMR implementation of SciPy (Fong & Saunders, 2011). |
| Experiment Setup | Yes | All the relevant hyperparameters are already mentioned in their respective sections. However, for convenience we gather and briefly discuss them in this section. ... Maximum Homology Dimension d ... Thresholding parameter δ ... Interpolation coefficient λ ... Feature selection factor β ... We have picked λ = 0.3 for all the quantitative experiments, which empirically represents a good choice for a broad range of applications. We chose δ = 0.07 in all our experiments. Empirically, we still found β = 0 to work well in a broad range of application scenarios and used it throughout all experiments. |