Neural Conjugate Flows: A Physics-Informed Architecture with Flow Structure

Authors: Arthur Bizzi, Lucas Nissenbaum, João M. Pereira

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate in numerical experiments how this topological group structure leads to concrete computational gains over other physics-informed neural networks in estimating and extrapolating latent dynamics of ODEs, while training up to five times faster than other flow-based architectures."
Researcher Affiliation | Academia | Instituto de Matemática Pura e Aplicada (IMPA), Rio de Janeiro, Brazil. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes procedural steps and architectures in figures and text (e.g., Figure 4, the NCF pipeline; Section 3.1, Neural Conjugation), but it does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Code: https://github.com/arthur-bizzi/Neural-Conjugate-Flows-AAAI
Open Datasets | No | The paper uses synthetic data generated by numerically integrating models (FitzHugh-Nagumo, Hodgkin-Huxley) and does not provide access information (link, DOI, repository, or formal citation) for any publicly available or open dataset.
Dataset Splits | No | The paper describes subdividing time intervals to sample training points (e.g., "subdivided the time-interval uniformly in N = 100 samples t_i"), but does not specify distinct training, validation, and test splits as percentages, absolute counts, or predefined partitions for reproducibility.
Hardware Specification | Yes | "They were executed on the same machine, equipped with an AMD Ryzen 9 5900HX processor, an RTX 3060 GPU and 16 GB of RAM."
Software Dependencies | No | The paper mentions using PyTorch and the TorchDyn library but does not specify their version numbers for reproducibility.
Experiment Setup | Yes | Each model was trained for 2000 epochs, full-batch, and optimized with ADAM (Kingma and Ba 2014). The optimizer used learning rate α = 1 × 10⁻³ and decay parameters β = (0.9, 0.99) for the first experiment, and α = 2.5 × 10⁻³ and β = (0.9, 0.95) for the second. The paper also mentions Xavier initialization, tanh activations, and a Gaussian Fourier feature layer with σ = 2.
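To make the data-generation claim concrete, the synthetic-data procedure described above (numerical integration of the FitzHugh-Nagumo model, sampled at N = 100 uniform time points) could be sketched as follows. The parameter values (a, b, tau, I), initial condition, and time horizon are illustrative assumptions, not values taken from the paper:

```python
# Sketch: generate synthetic training data by integrating the
# FitzHugh-Nagumo ODE and sampling N = 100 uniform time points,
# as the paper describes. Parameters below are conventional
# textbook values, assumed for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a=0.7, b=0.8, tau=12.5, I=0.5):
    v, w = y
    dv = v - v**3 / 3 - w + I          # fast (voltage-like) variable
    dw = (v + a - b * w) / tau          # slow recovery variable
    return [dv, dw]

N = 100
t_span = (0.0, 50.0)                    # assumed time horizon
t_eval = np.linspace(*t_span, N)        # uniform subdivision into N samples
sol = solve_ivp(fitzhugh_nagumo, t_span, y0=[0.0, 0.0], t_eval=t_eval)
data = sol.y                            # shape (2, N): trajectories (v, w)
```

Any choice of stiff or non-stiff solver (`method=` in `solve_ivp`) would serve here; the paper does not state which integrator was used.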
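The reported training setup (ADAM with the stated hyperparameters, Xavier initialization, tanh activations, and a Gaussian Fourier feature layer with σ = 2) could be assembled in PyTorch roughly as below. Layer widths, the number of Fourier features, the epoch count shown, and the placeholder loss target are assumptions for illustration; they are not the paper's architecture:

```python
# Minimal sketch of the stated training configuration (first experiment:
# lr = 1e-3, betas = (0.9, 0.99)). Network shape and targets are assumed.
import torch
import torch.nn as nn

class GaussianFourierFeatures(nn.Module):
    """Fixed random projection with frequencies ~ N(0, sigma^2), sigma = 2."""
    def __init__(self, in_dim, n_features, sigma=2.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, n_features) * sigma)

    def forward(self, t):
        proj = 2 * torch.pi * t @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

net = nn.Sequential(
    GaussianFourierFeatures(1, 32),     # outputs 64 features (sin + cos)
    nn.Linear(64, 64), nn.Tanh(),       # tanh activations, per the paper
    nn.Linear(64, 2),
)
for m in net.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)   # Xavier initialization

opt = torch.optim.Adam(net.parameters(), lr=1e-3, betas=(0.9, 0.99))
t = torch.linspace(0, 1, 100).unsqueeze(-1)  # full batch of time samples
target = torch.zeros(100, 2)                 # placeholder regression target
for epoch in range(2000):                    # 2000 epochs, full-batch
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(t), target)
    loss.backward()
    opt.step()
```

For the second experiment one would swap in `lr=2.5e-3, betas=(0.9, 0.95)`; everything else in the reported setup is unchanged.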