A Tensor Approach to Learning Mixed Membership Community Models
Authors: Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade
JMLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | While the results of this paper are mostly limited to a theoretical analysis of the tensor method for learning overlapping communities, we note recent results which show that this method (with improvements and modifications) is very accurate in practice on real datasets from social networks, and is scalable to graphs with millions of nodes (Huang et al., 2013). |
| Researcher Affiliation | Collaboration | Animashree Anandkumar EMAIL, Department of Electrical Engineering & Computer Science, University of California Irvine, Irvine, CA 92697, USA; Rong Ge EMAIL, Microsoft Research, One Memorial Drive, Cambridge, MA 02142, USA; Daniel Hsu EMAIL, Department of Computer Science, Columbia University, 116th Street and Broadway, New York, NY 10027, USA; Sham M. Kakade EMAIL, Microsoft Research, One Memorial Drive, Cambridge, MA 02142, USA |
| Pseudocode | Yes | Algorithm 1: {Π̂, P̂, α̂} ← Learn Mixed Membership(G, k, α₀, N, τ); Procedure 1: {Π̂, α̂} ← Learn Partition Community(G^{α₀}_{X,A}, G^{α₀}_{X,B}, G^{α₀}_{X,C}, T^{α₀}_{Y→{A,B,C}}, G, N, τ); Procedure 2: {λ, Φ} ← TensorEigen(T, {v_i}_{i∈[L]}, N); Procedure 3: {Ŝ} ← Support Recovery Homophilic Models(G, k, α₀, ξ, Π̂) |
| Open Source Code | No | The paper primarily presents a theoretical analysis of a new method. While it references 'recent experimental results' and 'a subsequent work' (Huang et al., 2013) that deploy and implement the tensor method, it does not state that *this paper* provides source code for its described methodology. |
| Open Datasets | No | The paper is theoretical, focusing on mathematical derivations, algorithms, and proofs. It does not describe or refer to any specific datasets used in empirical experiments, nor does it provide any access information for such datasets. |
| Dataset Splits | No | The paper is theoretical and does not present empirical experiments. Therefore, it does not discuss dataset splits (e.g., training, validation, test) or their methodologies. |
| Hardware Specification | No | The paper is theoretical and focuses on algorithm design and analysis. It mentions computational complexity in terms of 'serial computation model' and 'parallel computation model' but does not specify any actual hardware (e.g., CPU, GPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not describe experimental implementations. Consequently, it does not specify any software dependencies or their version numbers that would be required for replication. |
| Experiment Setup | No | The paper is theoretical, outlining a new learning approach and its mathematical guarantees. It does not contain experimental results and thus provides no details on experimental setup, hyperparameters, or training configurations. |
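The paper's Procedure 2 (TensorEigen) is a robust tensor power method: repeatedly apply a symmetric third-order tensor to a vector, normalize, keep the best of several random restarts, and deflate. The following is a minimal numpy sketch of that generic iteration, not the paper's implementation — the restart and iteration counts are arbitrary placeholder values, and the paper's whitening and robustness checks are omitted.

```python
import numpy as np

def tensor_apply(T, v):
    # T(I, v, v): contract a symmetric k x k x k tensor with v on modes 2 and 3
    return np.einsum('ijk,j,k->i', T, v, v)

def tensor_power_method(T, n_restarts=10, n_iters=100, seed=0):
    """Sketch of power iteration with deflation for a symmetric
    orthogonally decomposable tensor. Restart/iteration counts are
    arbitrary; no whitening or robustness checks are performed."""
    rng = np.random.default_rng(seed)
    k = T.shape[0]
    eigvals, eigvecs = [], []
    for _ in range(k):
        best_val, best_vec = -np.inf, None
        for _ in range(n_restarts):
            v = rng.standard_normal(k)
            v /= np.linalg.norm(v)
            for _ in range(n_iters):
                w = tensor_apply(T, v)
                v = w / np.linalg.norm(w)
            # Rayleigh-quotient analogue: T(v, v, v)
            lam = np.einsum('ijk,i,j,k->', T, v, v, v)
            if lam > best_val:
                best_val, best_vec = lam, v
        eigvals.append(best_val)
        eigvecs.append(best_vec)
        # deflate: subtract the recovered rank-1 component
        T = T - best_val * np.einsum('i,j,k->ijk', best_vec, best_vec, best_vec)
    return np.array(eigvals), np.array(eigvecs)
```

On a tensor T = Σᵢ λᵢ aᵢ⊗aᵢ⊗aᵢ with orthonormal aᵢ, each outer round recovers one (λᵢ, aᵢ) pair; in the paper this decomposition yields the community membership parameters.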