Optimising Spatial Teamwork Under Uncertainty
Authors: Gregory Everett, Ryan J. Beal, Tim Matthews, Timothy J. Norman, Sarvapali D. Ramchurn
AAAI 2025
| Reproducibility Variable | Assessment | Supporting Evidence (LLM Response) |
|---|---|---|
| Research Type | Experimental | When applied to team defence in football (soccer) using real-world data, our approach reduces opponent threat by 21%, outperforming optimised individual behaviour by 6%. Additionally, our model enhances the predictive accuracy of future attack locations and provides deeper insights compared to existing teamwork models that do not explicitly consider the spatial dynamics of teamwork. |
| Researcher Affiliation | Collaboration | ¹School of Electronics and Computer Science, University of Southampton, Southampton, United Kingdom; ²Sentient Sports, United Kingdom |
| Pseudocode | Yes | MCTS Algorithm. 1) Selection: select the most promising action from the root using UCB1 (Auer, Cesa-Bianchi, and Fischer 2002) until an unexplored or terminal node is reached. Each node N_s, with state s, leads to a child node with state s′, determined by action a and the transition function Γ. 2) Expansion: expand a node N_s by randomly selecting an unexplored action. 3) Simulation: at leaf node N_s′, approximate V(s) using the cumulative reward of an MMDP simulation until a terminal state is reached. 4) Backpropagation: backpropagate the value of the new child node N_s′ up the tree to the root. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | No | We compute results for all events (e.g., passes and shots) in a 34-game real-world football dataset in the K League 1 supplied to us by Be Pro Group Ltd. |
| Dataset Splits | No | We compute results for all events (e.g., passes and shots) in a 34-game real-world football dataset in the K League 1 supplied to us by Be Pro Group Ltd. The paper does not specify any training, validation, or test splits for this dataset. |
| Hardware Specification | No | The authors acknowledge the use of the IRIDIS High Performance Computing Facility, and associated support services at the University of Southampton, in the completion of this work. Beyond this acknowledgement, no processor, GPU, or memory specifications are given. |
| Software Dependencies | No | The paper names model components and baselines (e.g., "This deep-learning model, combining convolutional and graph neural networks, achieves the lowest mean Euclidean error (2.31m) compared to the same baselines in (Everett et al. 2023) such as XGBoost (2.47m), graph neural network (2.56m), and a simple spline (5.03m)"), but does not list software libraries or version numbers. |
| Experiment Setup | Yes | To improve MCTS convergence speed, we parallelise MCTS expansion and simulation. Unlike traditional leaf parallelisation (Cazenave and Jouandeau 2007), our approach uses a single thread and instead uses parallel tree nodes to batch process state transitions using Γ. At node N_s, we expand a random action and perform M transitions (100 in this paper) to reach M leaf nodes N_s′ = {N_s′⁰, ..., N_s′ᴹ}. ... We set a maximum speed of 5 m/s, which is extracted from Spearman (2018). |
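The four MCTS phases quoted in the Pseudocode row (UCB1 selection, random expansion, rollout simulation, backpropagation) can be sketched as follows. This is a minimal illustration over a toy one-dimensional MDP, not the paper's implementation: the state space, action set, transition function, and reward here are all assumptions standing in for the paper's football model and Γ.

```python
import math
import random

class Node:
    """One MCTS tree node: state, parent link, children keyed by action."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # cumulative reward

ACTIONS = [-1, +1]           # toy action set (assumption)
GOAL = 5                     # toy terminal boundary (assumption)

def transition(state, action):
    # Stand-in for the paper's transition function Γ.
    return state + action

def is_terminal(state):
    return abs(state) >= GOAL

def ucb1(parent, child, c=1.41):
    # UCB1 score (Auer, Cesa-Bianchi, and Fischer 2002).
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def select(node):
    # 1) Selection: descend via UCB1 until a node with unexplored
    #    actions (or a terminal node) is reached.
    while not is_terminal(node.state) and len(node.children) == len(ACTIONS):
        node = max(node.children.values(), key=lambda ch: ucb1(node, ch))
    return node

def expand(node):
    # 2) Expansion: randomly pick an unexplored action.
    untried = [a for a in ACTIONS if a not in node.children]
    action = random.choice(untried)
    child = Node(transition(node.state, action), parent=node)
    node.children[action] = child
    return child

def simulate(state):
    # 3) Simulation: random rollout to a terminal state; reward 1
    #    if the positive boundary is reached (toy reward, assumption).
    while not is_terminal(state):
        state = transition(state, random.choice(ACTIONS))
    return 1.0 if state >= GOAL else 0.0

def backpropagate(node, reward):
    # 4) Backpropagation: push the new leaf's value up to the root.
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent

def mcts(root_state, iterations=5000):
    root = Node(root_state)
    for _ in range(iterations):
        leaf = select(root)
        if not is_terminal(leaf.state):
            leaf = expand(leaf)
        backpropagate(leaf, simulate(leaf.state))
    # Recommend the most-visited action at the root.
    return max(root.children, key=lambda a: root.children[a].visits)
```

In this toy MDP the +1 action is strictly better, so `mcts(0)` should converge on it; the paper replaces the toy pieces with its MMDP simulation and learned models.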
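The Experiment Setup row describes a single-threaded variant of leaf parallelisation: at an expanded node, M = 100 transitions of the same (state, action) pair are processed as a batch, yielding M leaf states whose rewards are averaged before backpropagation. A minimal sketch, assuming a noisy scalar transition and a toy reward (both illustrative, not the paper's Γ or reward):

```python
import random
import statistics

M = 100  # number of batched transitions, as stated in the paper

def gamma(state, action, rng):
    # Illustrative stochastic stand-in for the transition function Γ:
    # deterministic move plus Gaussian noise (assumption).
    return state + action + rng.gauss(0.0, 0.1)

def batched_expand(state, action, rng):
    # Apply the same (state, action) pair M times in one pass,
    # producing M leaf states instead of a single child.
    return [gamma(state, action, rng) for _ in range(M)]

def batched_value(leaf_states):
    # Average a toy terminal reward over the M leaves to obtain a
    # lower-variance value estimate to backpropagate.
    return statistics.fmean(-abs(s) for s in leaf_states)
```

Batching the M transitions amortises the cost of evaluating Γ and reduces the variance of the leaf value estimate, which is the stated motivation for this scheme over expanding one child per iteration.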