Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Autoencoding Hyperbolic Representation for Adversarial Generation
Authors: Eric Qu, Dongmian Zou
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our model is capable of generating tree-like graphs as well as complex molecular data with comparable structure-related performance. |
| Researcher Affiliation | Academia | Eric Qu, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley 94720, CA, U.S.; Dongmian Zou, Division of Natural and Applied Sciences, CMCS and DSRC, Duke Kunshan University, Jiangsu 215316, China |
| Pseudocode | No | The paper describes algorithms and methods using mathematical equations and textual descriptions, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Dataset Our dataset consists of 500 randomly generated trees. Each tree is created by converting a uniformly random Prüfer sequence (Prüfer, 1918). ... Dataset We train and test our model on the MOSES benchmarking platform (Polykovskiy et al., 2020), which is refined from the ZINC dataset (Sterling & Irwin, 2015)... We train a HAEGAN with the MNIST dataset (Le Cun et al., 2010). |
| Dataset Splits | Yes | The dataset is randomly split into 400 for training and 100 for testing. ...MOSES benchmarking platform (Polykovskiy et al., 2020), which is refined from the ZINC dataset (Sterling & Irwin, 2015) and contains about 1.58M training, 176k test, and 176k scaffold test molecules. ... The MNIST (Le Cun et al., 2010) ... contains 60,000 training images and 10,000 test images of handwritten digits, 0 through 9. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. It discusses computational aspects but not the hardware itself. |
| Software Dependencies | No | The paper mentions the use of 'Riemannian Adam' optimizer and 'Lip Swish activation function', but it does not specify version numbers for these or other key software libraries (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Manifold curvature: K = 1.0; for all hyperbolic linear layers: dropout 0.0, use bias True; optimizer: Riemannian Adam (β1 = 0.9, β2 = 0.9); learning rate: 1e-4; batch size: 32; number of epochs: 20 |
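The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch. The dictionary below is illustrative only: the field names and structure are our own, not taken from the paper, and the `geoopt.optim.RiemannianAdam` mention in the comment is one plausible implementation of the "Riemannian Adam" optimizer the paper names.

```python
# Training configuration as reported in the paper's Experiment Setup row.
# Field names are illustrative; the paper does not define a config schema.
config = {
    "manifold_curvature": 1.0,     # K = 1.0
    "hyperbolic_linear": {         # applies to all hyperbolic linear layers
        "dropout": 0.0,
        "use_bias": True,
    },
    "optimizer": {
        "name": "RiemannianAdam",  # e.g. geoopt.optim.RiemannianAdam
        "betas": (0.9, 0.9),       # note: β2 = 0.9, not the usual 0.999
        "lr": 1e-4,
    },
    "batch_size": 32,
    "epochs": 20,
}
```

Collecting the values this way also makes the nonstandard β2 = 0.9 easy to spot; a reimplementation that left an Adam-style default of 0.999 in place would silently diverge from the reported setup.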