Exploiting Contextual Target Attributes for Target Sentiment Classification

Authors: Bowen Xing, Ivor W. Tsang

JAIR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on three benchmark datasets demonstrate the superiority of our model, which achieves new state-of-the-art performance.
Researcher Affiliation | Academia | Bowen Xing (EMAIL): Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China; School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China. Ivor W. Tsang (EMAIL): CFAR, Agency for Science, Technology and Research, 138632, Singapore; IHPC, Agency for Science, Technology and Research, 138632, Singapore; School of Computer Science and Engineering, Nanyang Technological University, 639798, Singapore; Australian Artificial Intelligence Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia.
Pseudocode | No | The paper describes the methodology using natural language and mathematical equations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository. It only mentions the use of an 'off-the-shelf dependency parser from the spaCy toolkit' with a link to spaCy's general website, which is a third-party tool.
Open Datasets | Yes | We conduct experiments on the Restaurant14, Laptop14 and Restaurant15 datasets (Pontiki, Galanis, Pavlopoulos, Papageorgiou, Androutsopoulos, & Manandhar, 2014; Pontiki, Galanis, Papageorgiou, Manandhar, & Androutsopoulos, 2015), which are widely adopted test beds for the TSC task.
Dataset Splits | Yes | Table 2: Dataset statistics.
Dataset       Positive (Train/Test)  Neutral (Train/Test)  Negative (Train/Test)
Laptop14       994 / 341              464 / 169              870 / 128
Restaurant14  2164 / 728              637 / 196              807 / 196
Restaurant15   912 / 326               36 / 34               256 / 182
Hardware Specification | No | The paper describes the BERT model configuration (e.g., 'Layer number is 12; hidden dimension is 768; attention head number is 12; total parameter number is 110M') but does not provide any specific details about the hardware (e.g., GPU, CPU models, or memory) used to run the experiments.
Software Dependencies | No | The paper mentions the 'BERT-base uncased version', the 'AdamW optimizer', and the 'spaCy toolkit'. However, it does not provide specific version numbers for the spaCy toolkit or other software libraries (e.g., PyTorch, TensorFlow) beyond the BERT model version, which is insufficient to meet the criteria for reproducible software dependencies.
Experiment Setup | Yes | Table 3 lists the details of the hyper-parameters.
Learning Rate: 1e-5; Dropout Rate: 0.3; Weight Decay Coefficient: 0.05; Layer Number of HIG2CN: 3; Dimension of Hidden State: 768; Coefficient α: 0.5; Coefficient β: 0.5
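Since the authors release no code, the Table 3 values above can only be transcribed, not executed. The sketch below is a minimal, hypothetical Python configuration that collects those reported hyper-parameters; the key names (e.g. `hig2cn_layers`) are our own labels, not identifiers from any released implementation, and the helper merely maps the config onto the keyword arguments an AdamW-style optimizer would take.

```python
# Hyper-parameters reported in Table 3 of the paper, collected in one place.
# Key names are illustrative -- the paper provides no code or config file.
CONFIG = {
    "learning_rate": 1e-5,   # AdamW learning rate
    "dropout_rate": 0.3,
    "weight_decay": 0.05,    # AdamW weight decay coefficient
    "hig2cn_layers": 3,      # layer number of HIG2CN
    "hidden_dim": 768,       # matches BERT-base hidden size
    "alpha": 0.5,            # coefficient α
    "beta": 0.5,             # coefficient β
}

def make_optimizer_kwargs(cfg):
    """Map the config onto AdamW-style optimizer keyword arguments."""
    return {"lr": cfg["learning_rate"], "weight_decay": cfg["weight_decay"]}

print(make_optimizer_kwargs(CONFIG))  # → {'lr': 1e-05, 'weight_decay': 0.05}
```

A reimplementation attempt would pass `make_optimizer_kwargs(CONFIG)` to `torch.optim.AdamW`, but without released code the remaining training details (batch size, epochs, seeds) stay unknown.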