Influence Learning in Complex Systems

Authors: Elena Congeduti, Roberto Rocchetta, Frans A. Oliehoek

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this work, we take steps towards addressing this question by conducting an extensive empirical study of learning models for influence approximations in various realistic domains, and evaluating how these models generalize over long horizons."
Researcher Affiliation | Academia | Elena Congeduti (EMAIL), Department of Computer Science, Delft University of Technology, the Netherlands; Roberto Rocchetta (EMAIL), Intelligent Energy System Group, University of Applied Sciences and Arts of Southern Switzerland, Switzerland; Frans A. Oliehoek (EMAIL), Department of Computer Science, Delft University of Technology, the Netherlands
Pseudocode | No | The paper describes algorithms and models (e.g., the ADAM optimization algorithm, recurrent and temporal convolutional neural networks) but does not present any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about providing source code for the methodology described, nor does it provide a direct link to a code repository. It mentions 'Citylearn', but this is a third-party tool used by the authors, not their own implementation code.
Open Datasets | Yes | For Microgrid (MG): "The former is modeled using hourly solar radiation data in https://openweathermap.org/api/solar-radiation and the photovoltaic power generation model introduced by Skoplaki & Palyvos (2009). The wind power generation is calculated by transforming the kinetic energy of the wind speed modelled via the Markov chain model in (Shamshad et al., 2005)"
Dataset Splits | Yes | "We collect n = 500 trajectories of influence sources and d-sets from the global simulator to form the training set Dh. The model performance is assessed on a test set consisting of ntest independent global model trajectories." ... Table 4: Optimization hyperparameters for the learning models ... Valid Split 90% ... Table 9: Optimization choices for long horizon tasks. Train size n 500, Test size m 100
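The quoted split (n = 500 collected training trajectories, a 90%/10% train/validation split, and a held-out test set of m = 100 trajectories) could be realized as sketched below. This is an illustrative assumption: the function name, the trajectory representation, and the fixed seed are not from the paper, which does not specify how the split was implemented.

```python
import random

def split_trajectories(trajectories, valid_frac=0.10, seed=0):
    """Hold out a validation fraction of the collected training trajectories.

    Sketch only: the paper reports a 90% train / 10% validation split
    but does not describe the mechanism, so this shuffles with a fixed
    seed for reproducibility.
    """
    idx = list(range(len(trajectories)))
    random.Random(seed).shuffle(idx)  # deterministic shuffle
    n_valid = int(len(idx) * valid_frac)
    valid = [trajectories[i] for i in idx[:n_valid]]
    train = [trajectories[i] for i in idx[n_valid:]]
    return train, valid

# e.g. n = 500 collected trajectories -> 450 train / 50 validation
train, valid = split_trajectories(list(range(500)))
```

The held-out test set of m = 100 trajectories would be collected separately from the global simulator, not carved out of the 500 training trajectories.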
Hardware Specification | No | The paper discusses computational effort and simulations but does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper mentions the use of the ADAM optimization algorithm and discusses different classes of models (LSTM, GRU, TCN, Fully Conv, Logistic Regression). However, it does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow, or scikit-learn versions).
Experiment Setup | Yes | "We adopt standard optimization techniques, including the ADAM optimization algorithm, linear decay of the learning rate and grid search over the space of initial and final learning rates. For each scenario, a fixed number of epochs and the batch size are selected. Details on the scenarios and hyperparameter configurations are provided in Table 3 and Table 4 in Appendix D, respectively."
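The quoted schedule (a learning rate decayed linearly between a grid-searched initial and final value) could be sketched as follows. The function name and the example rate pair are illustrative assumptions; the paper's actual grid values are in its Table 4.

```python
def linear_decay_lr(lr_init, lr_final, epoch, n_epochs):
    """Linearly interpolate the learning rate from lr_init (epoch 0)
    down to lr_final (last epoch), matching the stated linear decay.

    Sketch only: the paper grid-searches (lr_init, lr_final) pairs;
    the pair used below is an arbitrary example.
    """
    frac = epoch / max(n_epochs - 1, 1)
    return lr_init + (lr_final - lr_init) * frac

# One example (lr_init, lr_final) pair from a hypothetical grid:
schedule = [linear_decay_lr(1e-3, 1e-5, e, n_epochs=10) for e in range(10)]
```

A grid search would simply repeat training over several such pairs and keep the configuration with the best validation performance.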