Complexity of Representation and Inference in Compositional Models with Part Sharing
Authors: Alan Yuille, Roozbeh Mottaghi
JMLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper performs a complexity analysis of a class of serial and parallel compositional models of multiple objects, showing that they enable efficient representation and rapid inference. A second contribution is an analysis of the complexity of compositional models in terms of computation time (for serial computers) and number of nodes (e.g., neurons) for parallel computers. |
| Researcher Affiliation | Collaboration | Alan Yuille (EMAIL), Departments of Cognitive Science and Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA; Roozbeh Mottaghi (EMAIL), Allen Institute for Artificial Intelligence, Seattle, WA 98103, USA |
| Pseudocode | No | The paper describes inference algorithms using mathematical equations and textual explanations (e.g., equations 8 and 10 for bottom-up and top-down passes) and diagrams (Figure 11 for parallel implementation), but does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper discusses theoretical complexity analysis and models but does not provide any explicit statements about releasing source code, nor does it include links to a code repository or mention code in supplementary materials. |
| Open Datasets | No | The paper refers to an 'empirical growth regime using the dictionaries obtained by the compositional learning experiments reported in Zhu et al. (2010)', but it does not specify which datasets were used in those experiments, nor does it provide any access information (links, citations, or names of well-known public datasets) for any data relevant to its own analysis. |
| Dataset Splits | No | The paper focuses on theoretical complexity analysis and does not describe any experiments that would require specifying training, test, or validation dataset splits. |
| Hardware Specification | No | The paper discusses abstract computational complexity for 'serial computers' and 'parallel computers' and mentions 'neurons' in a conceptual context, but it does not specify any concrete hardware details (such as specific CPU or GPU models, or cloud computing resources) used for running experiments. |
| Software Dependencies | No | The paper focuses on a theoretical complexity analysis and does not describe any specific software dependencies or version numbers required to replicate its findings. |
| Experiment Setup | No | The paper presents a theoretical analysis of complexity and does not describe any experimental setup details, hyperparameters, or system-level training settings. |