Prediction Equilibrium for Dynamic Network Flows
Authors: Lukas Graf, Tobias Harks, Kostas Kollias, Michael Markl
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we complement our theoretical analysis by an experimental study, in which we systematically compare the induced average travel times of different predictors, including two machine-learning based models trained on data gained from previously computed approximate equilibrium flows, both on synthetic and real world road networks. |
| Researcher Affiliation | Collaboration | Lukas Graf, University of Passau, 94032 Passau, Germany; Tobias Harks, University of Passau, 94032 Passau, Germany; Kostas Kollias, Google, Mountain View, CA 94043, USA; Michael Markl, University of Passau, 94032 Passau, Germany |
| Pseudocode | No | A schematic overview of this algorithm is depicted in Figure 2. [...] A routing phase is run when the δ-routed DPE flow f has been calculated up to some multiple H = kδ of δ. All routes that have been calculated up to time H are invalidated, and new routes are determined. This is done in the following two steps: (R1) Gather predictions q̂_{i,e}(·, H, f) as piecewise linear functions for all commodities i ∈ I and edges e ∈ E. (R2) Compute the set of active outgoing edges δ⁺_v ∩ Ê_i(H, H, f) for all nodes v ∈ V_i and commodities i ∈ I. In the distribution phase the dynamic flow gets extended. Assume we have computed a δ-routed DPE flow f up to flow horizon H and want to extend it up to some time H′ > H. To ensure flow conservation, we distribute the node inflow rate b⁻_{i,v}(θ) of v ∈ V_i to outgoing edges. A distribution phase consists of the following steps. (D1) Update f with the deterministic flow with respect to the updated inflow rates f⁺_{i,e}\|[H, ∞) := b⁻_{i,v}(H) / \|δ⁺_v ∩ Ê_i(ϑ_δ(H), ϑ_δ(H), f)\| if e ∈ Ê_i(ϑ_δ(H), ϑ_δ(H), f), and 0 otherwise, for all commodities i ∈ I and edges e = vw ∈ E. (D2) Determine the maximal H′ ≤ ϑ_δ(H) + δ such that b⁻_{i,v} is constant on [H, H′) (according to the updated deterministic flow) for every node v and commodity i ∈ I, and set H := H′. |
| Open Source Code | Yes | An implementation of this algorithm using Python is publicly available in Graf et al. (2022a). [...] Lukas Graf, Tobias Harks, Kostas Kollias, and Michael Markl. Computation of Dynamic Prediction Equilibria. https://github.com/ArbeitsgruppeTobiasHarks/dynamic-prediction-equilibria/tree/jmlr, 2022a. Accessed: 2022-09-15. |
| Open Datasets | Yes | On the experimental side, we conduct experiments on small synthetic networks, on the Sioux Falls network from Le Blanc et al. (1975), and on a larger real road network of Anaheim obtained from Transportation Networks for Research Core Team (2016). |
| Dataset Splits | Yes | A training/validation split of 90%/10% was used. |
| Hardware Specification | Yes | The CPU times listed were measured on an Intel® Core™ i7-1165G7 @ 2.80GHz while using all four cores. |
| Software Dependencies | No | The Ridge regressor offered by the scikit-learn library (Pedregosa et al., 2011) works well for our uses. [...] An Adam optimizer minimizes the mean absolute error as the loss function. |
| Experiment Setup | Yes | The network inflow rates vanish at time h = 12 and the computation horizon is set to H_comp = 60, which is large enough for all particles to reach the sink. We set the reroute interval to δ_r = 1/8. The predictors q̂^L and q̂^RL use a prediction horizon of H = 20, and q̂^RL has a regularization window of δ = 1. The ML predictors q̂^ML have a step size of δ = 1 with k_p = 20 past and k_f = 20 future samples. [...] The neural network consists of 4 densely connected layers. The first three layers have the same size as the input. The LeakyReLU function acts as the activation function between the dense layers, reducing the vanishing-gradient problem. An Adam optimizer minimizes the mean absolute error as the loss function. Moreover, we apply an L2 regularization term to the weights and biases of each layer; this helps to reduce over-fitting. |
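The routing/distribution phases quoted in the Pseudocode row can be sketched in Python. This is an illustrative toy, not the authors' implementation (which lives in the linked repository): the network is a plain dict of outgoing edges, a "flow" is reduced to constant edge inflow rates, and the predictor is a scalar cost oracle, so "active" edges are simply those of minimum predicted cost.

```python
# Hedged sketch of the two phases of the delta-routed DPE extension
# algorithm. All names (out_edges, predictor, ...) are illustrative
# stand-ins, not the paper's API.

def routing_phase(out_edges, predictor, commodities, H):
    """(R1)/(R2): query predictions at time H and keep, per node and
    commodity, the outgoing edges of minimum predicted cost ('active')."""
    active = {}
    for i in commodities:
        for v, edges in out_edges.items():
            if not edges:
                continue  # sink nodes have no outgoing edges
            costs = {e: predictor(i, e, H) for e in edges}
            best = min(costs.values())
            active[i, v] = [e for e in edges if costs[e] == best]
    return active

def distribution_phase(inflow, active, commodities):
    """(D1): distribute each node inflow rate uniformly over the active
    outgoing edges (inactive edges receive rate 0)."""
    edge_inflow = {}
    for i in commodities:
        for (j, v), edges in active.items():
            if j != i:
                continue
            rate = inflow.get((i, v), 0.0)
            for e in edges:
                edge_inflow[i, e] = edge_inflow.get((i, e), 0.0) + rate / len(edges)
    return edge_inflow

# Tiny example: source s reaches sink t via intermediate nodes a and b.
out_edges = {"s": ["sa", "sb"], "a": ["at"], "b": ["bt"], "t": []}
predictor = lambda i, e, H: {"sa": 1.0, "sb": 2.0, "at": 1.0, "bt": 1.0}[e]
active = routing_phase(out_edges, predictor, ["car"], H=0.0)
rates = distribution_phase({("car", "s"): 4.0}, active, ["car"])
print(rates[("car", "sa")])  # all inflow routed onto the cheaper edge sa: 4.0
```

In the paper's algorithm, step (D2) would then advance the flow horizon to the next breakpoint at which some node inflow rate changes; that time-stepping is omitted here for brevity.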
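The Experiment Setup row names the predictors q̂^L and q̂^RL without defining them. A minimal sketch, assuming (from the paper, not from the quote above) that the linear predictor continues the current queue length with its current derivative and the regularized linear predictor uses a finite-difference slope over the regularization window δ, both clamped at zero and held constant past the prediction horizon:

```python
# Hedged sketch of the linear (q^L) and regularized linear (q^RL) queue
# predictors; the exact clamping/horizon conventions are assumptions.

def linear_predictor(q_H, dq_H, H, theta, horizon=20.0):
    """q^L: extrapolate the queue length at time H linearly with its
    current derivative, clamped at 0 and frozen beyond H + horizon."""
    t = min(theta, H + horizon)
    return max(0.0, q_H + dq_H * (t - H))

def regularized_linear_predictor(q_H, q_past, H, theta, delta=1.0, horizon=20.0):
    """q^RL: same shape, but the slope is the finite difference
    (q(H) - q(H - delta)) / delta over the regularization window."""
    slope = (q_H - q_past) / delta
    t = min(theta, H + horizon)
    return max(0.0, q_H + slope * (t - H))

# A shrinking queue is predicted to empty rather than go negative:
print(linear_predictor(q_H=3.0, dq_H=-1.0, H=0.0, theta=5.0))          # 0.0
# A queue that grew from 1 to 3 over the last unit keeps growing:
print(regularized_linear_predictor(q_H=3.0, q_past=1.0, H=0.0, theta=2.0))  # 7.0
```

The defaults horizon=20.0 and delta=1.0 mirror the values H = 20 and δ = 1 reported in the setup.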
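The neural-network predictor described in the setup (4 dense layers, the first three input-sized, LeakyReLU between them) can be sketched as a forward pass in dependency-free Python. This shows only the architecture's shape; weights are random placeholders, and the training loop (Adam, mean absolute error, L2 regularization) is not reproduced. Taking the input to be the k_p = 20 past queue samples and the output to be the k_f = 20 future samples is an assumption about how the quoted quantities map to layer sizes.

```python
# Architecture-only sketch of the queue predictor network; weight values
# and the input/output convention are placeholders, not the paper's.
import random

def leaky_relu(x, alpha=0.01):
    # LeakyReLU keeps a small gradient for negative inputs
    return x if x > 0 else alpha * x

def dense(inputs, weights, biases):
    # one fully connected layer: out_j = sum_k w[j][k] * in_k + b_j
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def make_layer(n_in, n_out, rng):
    return ([[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(samples, layers):
    h = samples
    for k, (w, b) in enumerate(layers):
        h = dense(h, w, b)
        if k < len(layers) - 1:  # LeakyReLU between, but not after, the dense layers
            h = [leaky_relu(v) for v in h]
    return h

rng = random.Random(0)
kp, kf = 20, 20
# four dense layers: the first three input-sized, the last producing kf outputs
sizes = [kp, kp, kp, kp, kf]
layers = [make_layer(sizes[i], sizes[i + 1], rng) for i in range(4)]
out = forward([0.5] * kp, layers)
print(len(out))  # 20 predicted future queue samples
```
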