Event Representations With Tensor-Based Compositions
Authors: Noah Weber, Niranjan Balasubramanian, Nathanael Chambers
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our proposed tensor models on a variety of event-related tasks, comparing against a compositional neural network model, a simple multiplicative model, and an averaging baseline. We use the New York Times Gigaword Corpus for training data. |
| Researcher Affiliation | Academia | Noah Weber, Stony Brook University, Stony Brook, New York, USA (EMAIL); Niranjan Balasubramanian, Stony Brook University, Stony Brook, New York, USA (EMAIL); Nathanael Chambers, United States Naval Academy, Annapolis, Maryland, USA (EMAIL) |
| Pseudocode | No | The paper describes algorithms in prose and provides mathematical equations, but it does not contain a dedicated 'Pseudocode' or 'Algorithm' block with structured, code-like formatting. |
| Open Source Code | Yes | We make all code and data publicly available: github.com/stonybrooknlp/event-tensors |
| Open Datasets | Yes | We use the New York Times Gigaword Corpus for training data. The transitive sentence similarity dataset (Kartsaklis and Sadrzadeh 2014a) contains 108 pairs of transitive sentences. |
| Dataset Splits | No | We hold out 4000 articles from the corpus to construct dev sets for hyperparameter tuning, and 6000 articles for test purposes. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions software tools like Ollie and GloVe, and optimizers like Adagrad, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We initialize the word embedding layer with 100 dimensional pretrained GloVe vectors (Pennington, Socher, and Manning 2014)... Training was done using Adagrad (Duchi, Hazan, and Singer 2011) with a learning rate of 0.01 and a minibatch size of 128. |
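The Experiment Setup row quotes a concrete optimization recipe: Adagrad with a learning rate of 0.01, a minibatch size of 128, and a 100-dimensional embedding table initialized from pretrained GloVe vectors. A minimal sketch of that update rule follows, assuming a generic parameter matrix with random stand-in gradients rather than the authors' actual tensor composition model; the `adagrad_step` helper and toy vocabulary size are hypothetical illustrations, not from the paper.

```python
import numpy as np

def adagrad_step(params, grads, cache, lr=0.01, eps=1e-8):
    """One Adagrad update (Duchi, Hazan, and Singer 2011): each
    parameter gets its own effective learning rate, scaled down by
    the square root of its accumulated squared gradients."""
    cache += grads ** 2
    params -= lr * grads / (np.sqrt(cache) + eps)
    return params, cache

# Hypothetical setup: a 100-dimensional embedding table for a toy
# vocabulary (the paper initializes with 100-d pretrained GloVe
# vectors; random initialization here is a stand-in).
rng = np.random.default_rng(0)
vocab_size, dim = 1000, 100
embeddings = rng.normal(scale=0.1, size=(vocab_size, dim))
cache = np.zeros_like(embeddings)

# One simulated step on a minibatch of size 128: in a real run the
# gradients would come from backprop through the event model.
minibatch_size = 128
grads = rng.normal(size=embeddings.shape)
embeddings, cache = adagrad_step(embeddings, grads, cache, lr=0.01)
```

Note that Adagrad's per-parameter scaling only kicks in after the cache accumulates; on the first step it behaves close to plain SGD with the base learning rate.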