Low-distortion and GPU-compatible Tree Embeddings in Hyperbolic Space
Authors: Max Van Spengler, Pascal Mettes
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that HS-DTE generates higher-fidelity embeddings than other hyperbolic tree embeddings and that HypFPE further increases the embedding quality for HS-DTE and other methods. The experiments section (Section 5) includes "Ablations", "Embedding Complete m-ary Trees", and "Embedding Phylogenetic Trees" with tables of results and figures comparing performance. |
| Researcher Affiliation | Academia | 1VIS Lab, University of Amsterdam, The Netherlands. Correspondence to: Max van Spengler <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Generalized Sarkar's Delaunay tree embedding; Algorithm 2: FPEAddition; Algorithm 3: Merge FPEs; Algorithm 4: FPERenormalize; Algorithm 5: Vec Sum; Algorithm 6: Vec Sum Err Branch; Algorithm 7: 2Sum; Algorithm 8: Fast2Sum; Algorithm 9: FPEMultiplication; Algorithm 10: Accumulate; Algorithm 11: FPEReciprocal; Algorithm 12: FPEDivision; Algorithm 13: FPEtanh⁻¹ |
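Two of the listed building blocks, 2Sum and Fast2Sum, are standard error-free transformations underlying floating-point-expansion (FPE) arithmetic: each returns the rounded sum together with the exact rounding error, so the pair (s, e) represents a + b exactly. A minimal Python sketch (function names are ours, not the paper's):

```python
def two_sum(a: float, b: float):
    # Knuth's 2Sum: s = fl(a + b), and a + b == s + e exactly,
    # with no precondition on the magnitudes of a and b.
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e


def fast_two_sum(a: float, b: float):
    # Dekker's Fast2Sum: same exact decomposition, but requires |a| >= |b|.
    s = a + b
    e = b - (s - a)
    return s, e
```

For example, `two_sum(1.0, 1e-20)` yields `s = 1.0` and `e = 1e-20`: the tiny addend is lost in the rounded sum but recovered exactly in the error term, which is what lets an FPE chain extend precision beyond the native 53 bits.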
| Open Source Code | Yes | The code will be made available at https://github.com/maxvanspengler/hyperbolic_tree_embeddings. |
| Open Datasets | Yes | We perform the construction on a phylogenetic tree expressing the genetic heritage of mosses in urban environments (Hofbauer et al., 2016), made available by (Sanderson et al., 1994), using various precisions. The phylogenetic trees describe mosses (Hofbauer et al., 2016), weevils (Marvaldi et al., 2002), the European carnivora (Roquet et al., 2014), and lichen (Zhao et al., 2016), obtained from (McTavish et al., 2015). |
| Dataset Splits | No | The paper focuses on embedding entire tree structures (e.g., m-ary trees, phylogenetic trees) and evaluating distortion metrics on these embedded trees. It does not mention any explicit training, validation, or test dataset splits for machine learning tasks. |
| Hardware Specification | No | The paper mentions "GPU accelerated software" and "GPU-compatible precision is 53 bits" but does not specify any exact GPU or CPU models, processor types, or detailed computer specifications used for experiments. |
| Software Dependencies | No | In this paper, we build upon the most recent arithmetic framework detailed in (Popescu, 2017). We have implemented this framework for PyTorch and extend its functionality to work with hyperbolic embeddings. The paper mentions PyTorch but does not provide a specific version number. |
| Experiment Setup | Yes | MAM is an easily optimizable objective, that we train using projected gradient descent for 450 iterations with a learning rate of 0.01, reduced by a factor of 10 every 150 steps, for every configuration. Each method is applied using float32 representations and a scaling factor of τ = 1.33. For the constructive methods and for h-MDS, a larger scaling factor improves performance, so we use τ = 5. For DO we find that increasing the scaling factor does not improve performance, so we use τ = 1.0. |
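The quoted setup (projected gradient descent, 450 iterations, learning rate 0.01 dropped by a factor of 10 every 150 steps) can be sketched in plain Python. The objective below is a toy placeholder, not the paper's MAM loss, and the Poincaré-ball radius margin is an assumption; only the schedule and the project-after-each-step structure mirror the described recipe:

```python
import math


def lr_schedule(step, base_lr=0.01, drop_every=150, factor=10.0):
    # Learning rate 0.01, reduced by a factor of 10 every 150 steps.
    return base_lr / (factor ** (step // drop_every))


def project_to_ball(x, max_norm=1.0 - 1e-5):
    # Projection step: keep the iterate inside the open Poincare ball
    # (the 1e-5 boundary margin is a hypothetical choice).
    n = math.sqrt(sum(v * v for v in x))
    if n >= max_norm:
        x = [v * max_norm / n for v in x]
    return x


# Toy run: pull a point toward a target outside the ball
# (placeholder objective standing in for the MAM loss).
x, target = [0.0, 0.0], [2.0, 0.0]
for step in range(450):
    lr = lr_schedule(step)
    grad = [2 * (xi - ti) for xi, ti in zip(x, target)]
    x = [xi - lr * gi for xi, gi in zip(x, grad)]
    x = project_to_ball(x)
```

The unconstrained minimizer lies outside the ball, so the iterate ends up pinned at the boundary; the projection after every gradient step is what makes this projected (rather than plain) gradient descent.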