Factor Learning Portfolio Optimization Informed by Continuous-Time Finance Models

Authors: Sinong Geng, Houssam Nassif, Zhaobin Kuang, Anders Max Reppen, K. Ronnie Sircar

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "On both synthetic and real-world portfolio optimization tasks, we observe that FaLPO outperforms five leading methods. Finally, we show that FaLPO can be extended to other decision-making problems with stochastic factors."
Researcher Affiliation | Collaboration | Sinong Geng, Princeton University; Houssam Nassif, Meta; Zhaobin Kuang, Stanford University; Anders Max Reppen, Boston University; Ronnie Sircar, Princeton University
Pseudocode | Yes | Algorithm 1: FaLPO Algorithm
Open Source Code | No | The paper does not provide an explicit statement or link to its own source code. It mentions 'wandb' (Biewald, 2020) for hyperparameter tuning, which is a third-party tool, not the authors' implementation.
Open Datasets | Yes | "We simulate environments with the Kim-Omberg model and implement the considered methods to compare their performance. ... For factors, we follow existing works (Aboussalah et al., 2022; De Prado, 2018; Dixon et al., 2020) and consider economic indexes, technical analysis indexes, and sector-specific factors such as oil prices, gold prices, and related ETF prices, leading to around 30 factors for each sector. In each sector we select 10 stocks according to the availability and trading volume in the considered time range (Appendix L.1)."
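The synthetic environment is simulated from the Kim-Omberg model, in which an Ornstein-Uhlenbeck factor drives the stock's risk premium. A minimal Euler-Maruyama sketch of such a one-factor market is given below; the function name, parameter values, and discretization choices are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def simulate_kim_omberg(n_paths=1000, n_steps=21, dt=1 / 252,
                        lam=1.0, y_bar=0.0, sigma_y=0.3,
                        sigma_s=0.2, r=0.0, rho=-0.5, s0=1.0, seed=0):
    """Euler-Maruyama simulation of a one-factor Kim-Omberg market:
        dY_t = lam * (y_bar - Y_t) dt + sigma_y dW_t        (OU factor)
        dS_t / S_t = (r + sigma_s * Y_t) dt + sigma_s dB_t,
    with corr(dW_t, dB_t) = rho. All parameter values are illustrative.
    Returns stock and factor paths of shape (n_steps + 1, n_paths)."""
    rng = np.random.default_rng(seed)
    y = np.full(n_paths, y_bar)
    s = np.full(n_paths, s0)
    ys, ss = [y.copy()], [s.copy()]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        # Correlated Brownian increment for the stock.
        db = rho * dw + np.sqrt(1.0 - rho**2) * rng.normal(0.0, np.sqrt(dt), n_paths)
        s = s * (1.0 + (r + sigma_s * y) * dt + sigma_s * db)
        y = y + lam * (y_bar - y) * dt + sigma_y * dw
        ys.append(y.copy())
        ss.append(s.copy())
    return np.array(ss), np.array(ys)
```

With `n_steps=21` this matches the 21-observation trajectories described in the paper, though the drift and volatility parameters here are placeholders.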
Dataset Splits | Yes | "We consider 21-day trading, and generate 1000 trajectories with 21 observations for training, 1000 for validation, and 1000 for testing. ... The training, validation, and testing data are constructed using rolling windows (Appendix L.3). ... Train Size {1260}, Validation Size {63}, Test Size {63}"
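The real-world splits are chronological rolling windows with the reported sizes (train 1260, validation 63, test 63 observations). A minimal sketch of how such windows could be enumerated follows; the step size and function name are assumptions, since the paper's exact rolling scheme is in its Appendix L.3:

```python
def rolling_window_splits(n_obs, train=1260, val=63, test=63, step=63):
    """Enumerate chronological (train, val, test) index ranges over a
    series of n_obs observations. Windows advance by `step` observations;
    the step size here is an assumption, not taken from the paper."""
    splits = []
    start = 0
    while start + train + val + test <= n_obs:
        splits.append((
            range(start, start + train),                          # training window
            range(start + train, start + train + val),            # validation window
            range(start + train + val, start + train + val + test),  # test window
        ))
        start += step
    return splits
```

Because every validation and test index is strictly after its training window, this construction avoids look-ahead leakage, which is the usual motivation for rolling windows in financial time series.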
Hardware Specification | Yes | "Compute Resources: AWS EC2 m5ad.24xlarge ... All experiments are conducted on AWS EC2 instances of type m5ad.24xlarge, using CPUs only."
Software Dependencies | No | The paper mentions the 'python package TA' and 'wandb', but does not provide specific version numbers for these or other key software components used in its methodology.
Experiment Setup | Yes | "For each method, we tune the learning rate and other method-specific hyperparameters with early stopping (Appendix K.3). ... The considered hyperparameters include the learning rate, λ, and batch size. ... Table 5 reports the hyperparameter values. ... Table 12: Hyperparameters for real-world experiments"
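The paper tunes hyperparameters (learning rate, λ, batch size) with early stopping on validation performance, using wandb for sweep management. A generic sketch of that tuning loop is below; `fit_one_epoch` and `validate` are hypothetical placeholders for the method-specific training and evaluation steps, and the patience/epoch settings are assumptions:

```python
def tune_with_early_stopping(configs, fit_one_epoch, validate,
                             max_epochs=100, patience=10):
    """Search over hyperparameter configurations, training each one with
    early stopping: stop a run once validation performance has failed to
    improve for `patience` consecutive epochs. Returns the best
    (config, validation_score) pair. All names and defaults here are
    illustrative assumptions, not the paper's implementation."""
    best_cfg, best_score = None, -float("inf")
    for cfg in configs:
        state, run_best, bad_epochs = None, -float("inf"), 0
        for _ in range(max_epochs):
            state = fit_one_epoch(cfg, state)   # one pass over training data
            score = validate(state)             # validation performance
            if score > run_best:
                run_best, bad_epochs = score, 0
            else:
                bad_epochs += 1
                if bad_epochs >= patience:      # early stopping trigger
                    break
        if run_best > best_score:
            best_cfg, best_score = cfg, run_best
    return best_cfg, best_score
```

In practice a tool like wandb would supply `configs` from a sweep definition and log each run's validation curve; the loop above only captures the selection logic.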