Link Prediction in Graphs with Autoregressive Features
Authors: Emile Richard, Stéphane Gaïffas, Nicolas Vayatis
JMLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 4 we provide an efficient algorithm for solving the optimization problem and show empirical results that illustrate our approach. In Section 5.1 we assess our algorithms on synthetic data, generated as described in Section 4.2. In Section 5.2 we use our algorithm for the prediction of sales volume for webmarketing data. We report empirical results averaged over 50 runs with confidence intervals in Figure 2. |
| Researcher Affiliation | Academia | Emile Richard EMAIL Department of Electrical Engineering Stanford University Packard 239 Stanford, CA 94304 Stéphane Gaïffas EMAIL CMAP Ecole Polytechnique Route de Saclay 91128 Palaiseau Cedex, France Nicolas Vayatis EMAIL CMLA ENS Cachan UMR CNRS No. 8536 61, avenue du Président Wilson 94 235 Cachan cedex, France |
| Pseudocode | Yes | Algorithm 1: Incremental Proximal-Gradient to Minimize L. Initialize A, Z1, Z2, W. Repeat: compute (G_A, G_W) = ∇_{A,W} ℓ(A, W); Z = prox_{θγ‖·‖₁}(A − θG_A); A = prox_{θτ‖·‖∗}(Z); W = prox_{θκ‖·‖₁}(W − θG_W); until convergence. Return (A, W) minimizing L. |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | We performed our experiments on the sales volume time series of the n = 200 top-sold books over T = 25 consecutive weeks (excluding the Christmas period) in 2009, covering 31,972 users. The data was provided by the company 1000mercis. |
| Dataset Splits | Yes | For choosing the tuning parameters κ, τ, γ we use the data collected from the same market a year before the test set to form the training and validation sets. For testing the quality of our predictor, we used the parameters yielding the best predictions on the validation set. As seasonality effects may harm the method if cross-validation is performed on data taken from a different period of the year, this is the best way to split the data into training, validation, and test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not explicitly mention specific software dependencies with version numbers. |
| Experiment Setup | Yes | In our experiments (see Section 5 below), we consider and compare both first-order and second-order VAR models. The parameters τ and γ are chosen by 10-fold cross-validation for each of the methods separately. Table 2: Relative quadratic error of the prediction of sales volume for three regularized VAR models: one based on a ridge regression penalty, one based on a LASSO penalty, and one based on our strategy with both sparse and low-rank regularizers. |
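The pseudocode row above alternates a gradient step with three proximal steps: soft-thresholding for the ℓ₁ penalties on A and W, and singular value thresholding for the trace-norm penalty on A. A minimal NumPy sketch of one possible reading of that loop is below; the function names, the fixed step size θ, and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_l1(X, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_nuclear(X, t):
    # Proximal operator of t * ||.||_* (trace norm): singular value thresholding.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def incremental_prox_grad(grad, A, W, theta, gamma, tau, kappa, n_iter=100):
    """Sketch of the alternating scheme in Algorithm 1 (illustrative only):
    gradient step, then l1 and trace-norm prox steps on A, l1 prox step on W."""
    for _ in range(n_iter):
        G_A, G_W = grad(A, W)                      # (G_A, G_W) = grad of loss
        Z = prox_l1(A - theta * G_A, theta * gamma)  # sparse step on A
        A = prox_nuclear(Z, theta * tau)             # low-rank step on A
        W = prox_l1(W - theta * G_W, theta * kappa)  # sparse step on W
    return A, W
```

As a usage sketch, plugging in a toy quadratic loss gradient such as `lambda A, W: (A - M, W)` drives A toward a sparse, low-rank shrinkage of a target matrix M, which is the combined regularization effect the paper's strategy relies on.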