Flow Matching for Few-Trial Neural Adaptation with Stable Latent Dynamics

Authors: Puli Wang, Yu Qi, Yueming Wang, Gang Pan

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Further experiments across multiple motor cortex datasets demonstrate the superior performance of FDA, achieving reliable results with fewer than five trials. Our FDA approach offers a novel and efficient solution for few-trial neural data adaptation, with significant potential for improving the long-term viability of real-world BCI applications.
Researcher Affiliation | Academia | 1 College of Computer Science and Technology, Zhejiang University; 2 The State Key Lab of Brain-Machine Intelligence, Zhejiang University; 3 MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University. Correspondence to: Yu Qi <EMAIL>, Gang Pan <EMAIL>.
Pseudocode | Yes | Algorithm 1 Flow-Based Dynamical Alignment (FDA)
1: Input: source domain D_S; target domain D_T; alignment method align_m; pre-defined η
2: Output: conditional feature extractor f_α; continuous normalizing flow network v_θ
3: Initialize f_α, v_θ
4: Pre-training phase:
5: for iter = 1 to n_pre_train do
6:   Sample τ, z_S(0) ~ N(0, I), x_S; set z_S(1) = η y_S
7:   Update f_α, v_θ by L_cfm(α, θ)
8: end for
9: Fine-tuning phase:
10: for iter = 1 to n_fine_tune do
11:   if align_m is FDA-MMD then
12:     Sample x_S, z_S(0) ~ N(0, I) and x_T, z_T(0) ~ N(0, I); update f_α by L_mmd(α)
13:   else if align_m is FDA-MLA then
14:     Sample x_T, z_T(0) ~ N(0, I); update f_α by L_mla(α)
15:   end if
16: end for
17: return f_α, v_θ
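The pre-training objective in Algorithm 1 is a conditional flow-matching loss along a path from z(0) ~ N(0, I) to z(1) = ηy. A minimal numpy sketch of such a loss for a straight interpolation path is shown below; function and variable names are illustrative, not taken from the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_loss(v_theta, z0, z1, t):
    """Flow-matching loss along the straight path z(t) = (1 - t) * z0 + t * z1,
    whose target velocity at every t is simply z1 - z0."""
    z_t = (1.0 - t)[:, None] * z0 + t[:, None] * z1
    target = z1 - z0
    pred = v_theta(z_t, t)
    return float(np.mean((pred - target) ** 2))

# Toy sanity check: an oracle returning the true velocity has zero loss,
# while a zero predictor does not.
z0 = rng.standard_normal((64, 2))          # z_S(0) ~ N(0, I)
z1 = 0.5 * rng.standard_normal((64, 2))    # stand-in for eta * y_S
t = rng.uniform(size=64)
oracle = lambda z_t, t: z1 - z0
zero = lambda z_t, t: np.zeros_like(z_t)
assert cfm_loss(oracle, z0, z1, t) == 0.0
assert cfm_loss(zero, z0, z1, t) > 0.0
```

In the paper the velocity model v_θ is a neural network conditioned on features f_α(x); here it is abstracted into a plain callable to keep the loss itself visible.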
Open Source Code | No | The paper provides links to the code for baselines such as CEBRA (https://github.com/AdaptiveMotorControlLab/cebra), ERDiff (https://github.com/yulewang97/ERDiff), NoMAD (https://github.com/arsedler9/lfads-torch/tree/main), and Cycle-GAN (https://github.com/limblab/adversarial_BCI). However, there is no explicit statement or link provided for the open-source code of the FDA methodology itself.
Open Datasets | Yes | Datasets: We employed three distinct datasets of extracellular neural recordings from the primary motor cortex (M1) of non-human primates (Ma et al., 2023). Additional information about the datasets can be found in App. B.1.
Dataset Splits | Yes | Data Preprocess and Split: We extracted trials from the go-cue time to the trial end. The data was then timestamped and smoothed into firing rates in 50 ms bins. Sessions containing approximately 200 trials, along with 2D cursor velocity labels, were used as D_S for pre-training, while a separate session without labels was used as D_T for fine-tuning. For few-trial alignment, we used the target ratio r to set the number of target trials drawn from all recorded ones, typically setting r to 0.02, 0.03, 0.04, and 0.06, with 0.02 corresponding to no more than 5 trials. To account for the increased randomness of few-trial selection, we pre-train our FDA with 5 different random seeds and fine-tune it on 25 different random selections of few trials.
Hardware Specification | Yes | We evaluated the computational efficiency of FDA against the baselines under identical hardware configurations (NVIDIA GeForce RTX 3080 Ti, 12 GB). The comparison was based on the number of parameters and the training time per epoch or in total, covering both the pre-training and fine-tuning phases. As shown in Table S10 and Table S11, FDA required less training time than ERDiff and NoMAD, owing to its efficient training objectives based on short-term context windows. Further analysis of FDA's inference time is presented in Table S12. The average inference time per window is approximately 4 ms, demonstrating its suitability for real-time applications. We further analyzed the inference time of FDA-MLA and FDA-MMD on an NVIDIA GeForce GTX 1080 Ti (11 GB).
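A generic wall-clock harness of the kind used for such per-window timings might look like the sketch below. This is not the authors' benchmarking code; note that on GPU a device synchronization (e.g. torch.cuda.synchronize()) is needed before each timestamp so asynchronous kernels are fully counted:

```python
import time

def mean_inference_time_ms(infer_fn, inputs, warmup=10, reps=100):
    """Average wall-clock inference time per window, in milliseconds."""
    for x in inputs[:warmup]:          # warm up caches before timing
        infer_fn(x)
    start = time.perf_counter()
    for _ in range(reps):
        for x in inputs:
            infer_fn(x)
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / (reps * len(inputs))
```

Averaging over many repetitions, as here, is what makes a figure like "approximately 4 ms per window" stable against scheduler jitter.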
Software Dependencies | No | The paper mentions various software tools and models used for baselines (e.g., CEBRA, ERDiff, NoMAD, Cycle-GAN, LSTM) and provides references to their implementations. However, it does not specify version numbers for any core software libraries or frameworks used in the implementation of the FDA method itself (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | The main configurations for model training included the learning rate and weight decay of the Adam optimizer, the batch size, and the number of epochs for the pre-training and fine-tuning phases. Details of these hyperparameters are provided in Table S3 and Table S4, respectively.

Table S3. Detailed Pre-training Setup
Dataset | Learning Rate | Weight Decay | Epochs | Batch Size
CO-C | 2e-3 | 1e-5 | 3500 | 256
CO-M | 2e-3 | 1e-5 | 3500 | 256
RT-M | 2e-3 | 1e-5 | 3500 | 256

Table S4. Detailed Fine-tuning Setup
Dataset | Learning Rate | Weight Decay | Epochs | Batch Size
CO-C | 1e-4 | 1e-5 | 25 | 256
CO-M | 1e-4 | 1e-5 | 25 | 256
RT-M | 1e-4 | 1e-5 | 25 | 256
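Since the reported hyperparameters are identical across the three datasets, they map directly onto two small configs and an Adam setup. A hypothetical sketch (only the numeric values come from Tables S3/S4; the names and helper are illustrative):

```python
# Hyperparameters transcribed from Tables S3 / S4 (same for CO-C, CO-M, RT-M).
PRETRAIN = dict(lr=2e-3, weight_decay=1e-5, epochs=3500, batch_size=256)
FINETUNE = dict(lr=1e-4, weight_decay=1e-5, epochs=25, batch_size=256)

def make_adam_kwargs(cfg):
    """Map a phase config onto torch.optim.Adam keyword arguments,
    e.g. torch.optim.Adam(model.parameters(), **make_adam_kwargs(cfg))."""
    return {"lr": cfg["lr"], "weight_decay": cfg["weight_decay"]}

assert make_adam_kwargs(FINETUNE) == {"lr": 1e-4, "weight_decay": 1e-5}
```

The two-orders-of-magnitude gap between pre-training (3500 epochs, lr 2e-3) and fine-tuning (25 epochs, lr 1e-4) reflects the paper's few-trial adaptation setting, where the target phase must stay cheap.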