Predicting a Switching Sequence of Graph Labelings

Authors: Mark Herbster, Stephen Pasteris, Massimiliano Pontil

JMLR 2015

Each entry below gives a reproducibility variable, its assessed result, and the supporting excerpt (LLM response):
Research Type: Theoretical
    "We propose a polynomial time algorithm for online prediction in this setting and derive a mistake bound for the algorithm. The bound is controlled by the geometric cut of the observed and latent labelings, as well as the resistance diameters of the graphs. When specialized to multitask prediction and online switching problems the bound gives new and sharper results under certain conditions."
Researcher Affiliation: Academia
    Mark Herbster (EMAIL), Stephen Pasteris (EMAIL), Massimiliano Pontil (EMAIL); Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK
Pseudocode: Yes
    Algorithm 1
    Input: an n-vertex graph G and a p-vertex graph H.
    Parameters: K, θ̂, η.
    Initialization: W_0 ← I / (K(n+p)), where I is the (n+p) × (n+p) identity matrix.
    For t = 1, ..., T:
        Get pair of vertices (i_t, j_t) ∈ V(G) × V(H).
        Define the matrix X_t := (1/2) x_t x_tᵀ, with x_t given by Equation (4).
        Predict ŷ_t := 1 if Tr(W_{t-1} X_t) ≥ θ̂/(K+1), and -1 otherwise.
        Receive label y_t ∈ {-1, 1}; if ŷ_t ≠ y_t, update
            W_t ← exp(log(W_{t-1}) + η(y_t - ŷ_t) X_t).   (2)
Open Source Code: No
    "In this paper we are primarily concerned with the mistake bound and postpone further discussions on large scale implementations of the algorithm to a future occasion."
Open Datasets: No
    "We are given two undirected graphs, an n-vertex graph G and a p-vertex graph H. We let V(G) and V(H) be the set of vertices in G and H, respectively, and let L_G and L_H be the corresponding graph Laplacians."
Dataset Splits: No
    The paper is theoretical and does not describe experiments on specific datasets, so no dataset splits are mentioned.
Hardware Specification: No
    The paper is theoretical and does not describe any experimental setup or hardware used for running experiments.
Software Dependencies: No
    The paper is theoretical and does not mention any software implementation details or specific version numbers for dependencies.
Experiment Setup: No
    "The algorithm then depends on two input parameters, K > 1 and θ̂. The first parameter is the number of labelings of the observed graph, which then determines the learning rate η. The second parameter θ̂ is a scaled threshold for the linear classifier."
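The prediction-and-update loop quoted above can be sketched in a few lines of numpy/scipy. This is a minimal illustration, not the authors' implementation: it assumes the threshold form θ̂/(K+1) for the prediction rule, and it takes the instance vectors x_t as given (the Equation (4) construction from the two graph Laplacians is not reproduced here). The function name and trial format are hypothetical.

```python
import numpy as np
from scipy.linalg import expm, logm


def run_algorithm1(trials, n, p, K, theta_hat, eta):
    """Sketch of Algorithm 1 (matrix exponentiated gradient style update).

    trials: sequence of (x_t, y_t) pairs, where x_t is an
    (n+p)-dimensional instance vector (Equation (4) in the paper,
    construction assumed done elsewhere) and y_t is in {-1, +1}.
    Returns the final weight matrix and the mistake count.
    """
    d = n + p
    W = np.eye(d) / (K * d)                    # W_0 = I / (K (n+p))
    mistakes = 0
    for x, y in trials:
        X = 0.5 * np.outer(x, x)               # X_t = (1/2) x_t x_t^T
        margin = np.trace(W @ X)
        # Assumed threshold form: predict +1 iff Tr(W_{t-1} X_t) >= theta_hat/(K+1)
        y_hat = 1 if margin >= theta_hat / (K + 1) else -1
        if y_hat != y:                         # update only on mistakes, Equation (2)
            mistakes += 1
            W = expm(logm(W) + eta * (y - y_hat) * X)
    return W, mistakes
```

Note the matrix-logarithm update keeps W symmetric positive definite, since the argument of expm stays symmetric; this is the standard matrix exponentiated gradient form the quoted update (2) takes.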