Self-Supervised Collaborative Information Bottleneck for Text Readability Assessment

Authors: Jinshan Zeng, Xianglong Yu, Xianchao Tong, Wenyan Xiao

AAAI 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments are conducted on four English and two Chinese corpora to demonstrate the effectiveness of the proposed model. Experimental results show that the proposed model outperforms state-of-the-art models in terms of four important evaluation metrics, and the suggested SCIB module can effectively capture the specific- and common-intrinsic information." "Experiments: In this section, a series of comparative experiments were conducted on four English and two Chinese corpora to compare the effectiveness of the proposed model with existing state-of-the-art models. We also conducted ablation experiments on various components of the proposed model."
Researcher Affiliation | Academia | (1) Jiangxi Normal University, (2) Jiangxi University of Science and Technology. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes methods and uses equations but does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper does not provide an explicit statement about releasing its own source code, nor a link to a repository. Regarding baselines, it mentions: "since we cannot access their reproducible codes."
Open Datasets | Yes | "We evaluated the proposed model through experiments over four English corpora (i.e., WeeBit (Vajjala and Meurers 2012), Cambridge (http://www.cambridgeenglish.org), Newsela (https://newsela.com) and CLEAR), and two Chinese corpora (i.e., CMT (Lee, Liu, and Cai 2020; Zeng et al. 2022) and CMER)."
Dataset Splits | Yes | "For each corpus, we split the data into training, validation, and test sets in a ratio of 8:1:1, and reported the average results of three trials."
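The reported 8:1:1 split can be sketched as follows. This is an illustrative helper, not the authors' code: the paper states the ratio and the averaging over three trials, but not the shuffling procedure or random seeds, so `split_8_1_1` and its seed handling are assumptions.

```python
import random

def split_8_1_1(samples, seed=0):
    """Shuffle and split a corpus into train/val/test at an 8:1:1 ratio.

    Hypothetical helper: the paper reports the ratio but not the exact
    splitting procedure or seeds, so the shuffle and seed are assumptions.
    """
    rng = random.Random(seed)
    samples = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(samples)
    n = len(samples)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test

# For a 100-document corpus this yields 80 / 10 / 10 documents;
# repeating with three seeds would give the three trials to average over.
train, val, test = split_8_1_1(list(range(100)))
```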
Hardware Specification | Yes | "All experiments were implemented on RTX 3090 and A40 GPUs, and in the PyTorch framework."
Software Dependencies | No | The paper mentions the "PyTorch framework" and "AdamW (Loshchilov and Hutter 2017) as the optimizer" but does not specify any software libraries with version numbers.
Experiment Setup | Yes | "For the proposed model, we used AdamW (Loshchilov and Hutter 2017) as the optimizer with a weight decay parameter of 0.01 and a warmup ratio of 0.1."
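The warmup ratio of 0.1 reported above can be illustrated with a learning-rate multiplier schedule. This is a minimal sketch under assumptions: the paper gives the warmup ratio and the AdamW weight decay (0.01) but does not specify the post-warmup decay shape, so the linear decay here is a common default, not the authors' confirmed choice.

```python
def warmup_schedule(step, total_steps, warmup_ratio=0.1):
    """Learning-rate multiplier with linear warmup then linear decay.

    Illustrative sketch: the paper reports warmup_ratio = 0.1 with AdamW
    (weight decay 0.01); the linear decay after warmup is an assumed,
    commonly used default.
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp linearly from 0 up to 1 over the first 10% of training.
        return step / max(1, warmup_steps)
    # Decay linearly from 1 back to 0 over the remaining steps.
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# The multiplier reaches 1.0 at 10% of total steps and 0.0 at the end;
# in PyTorch it could be plugged into torch.optim.lr_scheduler.LambdaLR.
```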