Semialgebraic Neural Networks: From roots to representations
Authors: S. David Mis, Matti Lassas, Maarten V. de Hoop
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Lastly, we provide example applications of these networks and show they can be trained with traditional deep-learning techniques. ... Section 5 NUMERICAL EXAMPLE: SOLVING LINEAR SYSTEMS ... We also demonstrate through numerical experiments that SANNs can be trained using standard techniques (Sections 5.2 and F.1) |
| Researcher Affiliation | Academia | S. David Mis Rice University EMAIL Matti Lassas University of Helsinki EMAIL Maarten V. de Hoop Rice University EMAIL |
| Pseudocode | Yes | Algorithm 1 Evaluating a SANN |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology is provided, nor does it include any links to code repositories. |
| Open Datasets | No | The paper describes generating data for experiments (e.g., "input matrices X were sampled from a distribution of strictly diagonally dominant matrices" in Section 5.2) and problem settings (e.g., "2x2 rectangular electrical network" in Section F.1) but does not provide access information or specific names for publicly available datasets. |
| Dataset Splits | No | The paper mentions "training data" and reports "Validation accuracy" but does not specify exact dataset splits (e.g., percentages or counts for training, validation, and test sets). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using "Adam optimizer" (Section 5.2) and references "popular libraries for numerically solving ODEs, e.g. Malengier et al. (2018); Kidger (2021)" (Section 3.3). However, it does not specify version numbers for these software components or libraries. |
| Experiment Setup | Yes | We used Adam optimizer to minimize the loss L_total from equation (8). ... where λ > 0 is a small, suitably chosen parameter, e.g. λ = 10⁻². |
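The Open Datasets row quotes the paper as sampling "input matrices X ... from a distribution of strictly diagonally dominant matrices." The paper does not specify the distribution, so the following is only a minimal sketch of one common way to generate such matrices (inflate each diagonal entry past its row's off-diagonal absolute sum); the function name `sample_sdd` and the Gaussian base distribution are assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sdd(n):
    """Sample one strictly diagonally dominant n-by-n matrix (sketch)."""
    M = rng.normal(size=(n, n))
    # Row-wise absolute sums (computed before overwriting the diagonal);
    # setting M_ii = sum_j |M_ij| + u with u > 0 guarantees
    # |M_ii| > sum_{j != i} |M_ij|, i.e. strict diagonal dominance.
    np.fill_diagonal(M, np.abs(M).sum(axis=1) + rng.uniform(0.1, 1.0, n))
    return M

X = sample_sdd(4)
```

Strict diagonal dominance guarantees invertibility (Levy–Desplanques), which makes such matrices a safe input distribution for a linear-system-solving task like the one in the paper's Section 5.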
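The Experiment Setup row states that Adam minimizes a loss L_total containing a small weight λ (e.g. λ = 10⁻²). The paper's actual loss and model are not reproduced here; the sketch below only illustrates the quoted setup (a hand-written Adam update minimizing a data term plus a λ-weighted penalty) on a hypothetical least-squares toy problem.

```python
import numpy as np

def adam_minimize(grad_fn, x0, lr=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    """Minimal Adam optimizer (Kingma & Ba, 2015) on a numpy vector."""
    x = x0.astype(float).copy()
    m = np.zeros_like(x)  # first-moment (mean of gradients) estimate
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Hypothetical toy problem: fit w so that A w ~ b, with total loss
# L_total = L_data + lam * L_reg and a small lam as in the quote.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
lam = 1e-2

def grad(w):
    # Gradient of mean squared residual plus lam * (1/2)||w||^2.
    return A.T @ (A @ w - b) / len(b) + lam * w

w = adam_minimize(grad, np.zeros(5), lr=0.05, steps=3000)
```

In practice the paper would use a library implementation of Adam rather than this hand-rolled update; the sketch exists only to make the quoted λ-weighted loss structure concrete.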