Bayesian Regression Markets
Authors: Thomas Falconer, Jalal Kazempour, Pierre Pinson
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 and Section 5 illustrate our findings through a set of simulation-based and real-world case studies, respectively. |
| Researcher Affiliation | Academia | Thomas Falconer, Department of Wind and Energy Systems, Technical University of Denmark, Kgs. Lyngby, 2800, Denmark; Jalal Kazempour, Department of Wind and Energy Systems, Technical University of Denmark, Kgs. Lyngby, 2800, Denmark; Pierre Pinson, Dyson School of Design Engineering, Imperial College London, London, SW7 2DB, United Kingdom |
| Pseudocode | Yes | Algorithm 1: In-sample online regression market |
| Open Source Code | Yes | Our code is publicly available at: https://github.com/tdfalc/regression-markets |
| Open Datasets | Yes | We make use of an open source dataset, namely the Pan-European Climate Database, as detailed in Koivisto and Leon (2022). This dataset consists of hourly average irradiance values for European countries, obtained by simulating the output from south-facing solar photovoltaic (PV) modules across several intra-country regions. Although this data is not exactly real, it effectively captures the spatio-temporal aspects of solar irradiance across the continent, with the benefit of not being contaminated with spurious data points, as can often be the case with real-world datasets. [...] URL: https://data.dtu.dk/articles/dataset/Solar_PV_generation_time_series_PECD_2021_update_/19727239 |
| Dataset Splits | Yes | We extract data that spans a two-year period from the start of 2018 to the end of 2019, with an hourly resolution. Suppose that each of the six countries takes turns assuming the role of the central agent in parallel transactions. [...] We consider an online setting such that over the entire two-year period, at each time step (i.e., one hour interval), when a new observation of the target signal is collected, the forecast issued at the previous time step is used for out-of-sample market clearing; at the same time, the posterior is updated, the in-sample market is cleared, and a forecast for the next time step is subsequently made. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or cloud instances) are mentioned in the paper for running experiments. The paper only refers to a 'platform capable of handling both the analytical (e.g., parameter inference) and market-based (e.g., revenue allocation) components together in tandem' without further specification. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, scikit-learn, etc.) are mentioned in the paper. |
| Experiment Setup | Yes | In each of the simulation-based case studies, the central agent seeks to model a target variable Y_t using their own feature x_{1,t} and the relevant features available in the market, each owned by a unique support agent, namely x_{2,t} and x_{3,t}. The likelihood is an independent Gaussian stochastic process with finite precision ξ_{Y_t}. The linear interpolant for the grand coalition is f(x_t, w) = w_0 + w_1 x_{1,t} + w_2 x_{2,t} + w_3 x_{3,t}, ∀t. [...] We assume the true coefficients to be w = [0.11, 0.31, 0.08, 0.65], and the noise precision to be constant for all time steps, treated as a hyperparameter with ξ_{Y_t} = 3.31, ∀t. We further set the valuation of the central agent to λ = 0.01 EUR per time step and per unit improvement in ℓ. [...] We set τ = 0.998 and assume the valuation of each central agent to be λ = 50 EUR and λ = 150 EUR per time step and per unit improvement in ℓ for the in-sample and out-of-sample stages, respectively, to reflect costs of balancing. |
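The simulated setting described in the table rows above (a linear model with known coefficients, Gaussian noise of fixed precision, and an online loop that forecasts with the current posterior before updating it on each new observation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the i.i.d. standard-normal features, the prior precision `alpha`, the horizon `T`, and the random seed are all assumptions made here for the sake of a runnable example; only the coefficients `w_true` and the noise precision `xi` come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

w_true = np.array([0.11, 0.31, 0.08, 0.65])  # [w0, w1, w2, w3] as reported
xi = 3.31                                    # noise precision -> noise std = xi**-0.5

# Synthetic data: intercept column plus three features (feature law assumed here).
T = 500
X = np.column_stack([np.ones(T), rng.standard_normal((T, 3))])
y = X @ w_true + rng.normal(scale=xi ** -0.5, size=T)

# Online conjugate Bayesian linear regression with known noise precision:
# at each step, issue a forecast from the current posterior mean, then fold
# the new observation into the posterior (precision and mean updates).
alpha = 1.0                       # prior precision on weights (assumed)
S_inv = alpha * np.eye(4)         # posterior precision matrix
b = np.zeros(4)                   # accumulates xi * y_t * x_t (zero-mean prior)
m = np.zeros(4)                   # posterior mean

preds = np.empty(T)
for t in range(T):
    preds[t] = X[t] @ m                        # out-of-sample forecast
    S_inv = S_inv + xi * np.outer(X[t], X[t])  # precision update
    b = b + xi * y[t] * X[t]
    m = np.linalg.solve(S_inv, b)              # posterior mean update

print(np.round(m, 2))  # posterior mean concentrates around w_true
```

With enough observations, the posterior mean recovers the true coefficients, mirroring how the in-sample market in the paper clears against the freshly updated posterior at every time step.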